From: Oleg Pykhalov
To: guix-patches@gnu.org
Subject: [PATCH 0/1] gnu: python-internetarchive: Update to 1.7.1.
Date: Thu, 17 Aug 2017 23:35:23 +0300
Message-ID: <877ey2aq5w.fsf@gmail.com>
Cc: Danny Milosavljevic

https://debbugs.gnu.org/cgi/bugreport.cgi?bug=27699

Danny Milosavljevic writes:

> After I fixed up the test invocation, still 11 tests of 105 fail,
> apparently mostly because the Requests mock doesn't work. Could you
> take a look?
> The mocking is done in tests/conftest.py in internetarchive-1.6.0.

11 tests failed, and it looks like all of them require an Internet
connection.  When Guix builds a package, there is no networking inside
the build chroot, is there?  So those tests cannot pass.  Could we
disable them selectively (not all 105)?  A sketch of what I mean
follows the build log below.

Thanks.

--8<---------------cut here---------------start------------->8---
starting phase `check'
============================= test session starts ==============================
platform linux -- Python 3.5.3, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: /tmp/guix-build-python-internetarchive-1.7.1.drv-0/internetarchive-1.7.1, inifile: setup.cfg
plugins: hypothesis-3.1.0, capturelog-0.7
collected 105 items

tests/test_api.py ......F.............
tests/test_bad_data.py .
tests/test_config.py .........
tests/test_exceptions.py .
tests/test_item.py .............................
tests/test_session.py ...
tests/test_utils.py .........
tests/cli/test_argparser.py ..
tests/cli/test_ia.py F
tests/cli/test_ia_download.py FFFFFFFFF
tests/cli/test_ia_list.py ........
tests/cli/test_ia_metadata.py ...
tests/cli/test_ia_search.py ..
tests/cli/test_ia_upload.py ........

=================================== FAILURES ===================================
__________________________ test_get_item_with_kwargs ___________________________

self =

    def _new_conn(self):
        """ Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        extra_kw = {}
        if self.source_address:
            extra_kw['source_address'] = self.source_address

        if self.socket_options:
            extra_kw['socket_options'] = self.socket_options

        try:
            conn = connection.create_connection(
>               (self.host, self.port), self.timeout, **extra_kw)

/gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/packages/urllib3/connection.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('archive.org', 443), timeout = 1e-13, source_address = None
socket_options = [(6, 1, 1)]

    def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
                          source_address=None, socket_options=None):
        """Connect to *address* and return the socket object.

        Convenience function.  Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object.  Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect.  If no *timeout* is supplied, the
        global default timeout setting returned by :func:`getdefaulttimeout`
        is used.  If *source_address* is set it must be a tuple of (host,
        port) for the socket to bind as a source address before making
        the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith('['):
            host = host.strip('[]')
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

>       for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

/gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'archive.org', port = 443, family =
type = , proto = 0, flags = 0

    def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
        """Resolve host and port into list of address info entries.
        [docstring elided]
        """
        # We override this function since we want to translate the numeric family
        # and socket type values to enum constants.
        addrlist = []
>       for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E       socket.gaierror: [Errno -2] Name or service not known

/gnu/store/3aw9x28la9nh8fzkm665d7fywxzbl15j-python-3.5.3/lib/python3.5/socket.py:733: gaierror

During handling of the above exception, another exception occurred:

self =
method = 'GET', url = '/metadata/nasa', body = None
headers = {'Connection': 'keep-alive', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'User-Agent': 'internetarchive/1.7.1 (Linux ; N; en; None) Python/3.5.3'}
retries = Retry(total=0, connect=0, read=3, redirect=0), redirect = False
assert_same_host = False
timeout =
pool_timeout = None, release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None
release_this_conn = True, err = None, clean_exit = False
timeout_obj =
is_new_proxy_conn = False

    def urlopen(self, method, url, body=None, headers=None, retries=None,
                redirect=True, assert_same_host=True, timeout=_Default,
                pool_timeout=None, release_conn=None, chunked=False,
                body_pos=None, **response_kw):
        """
        Get a connection from the pool and perform an HTTP request.
        [long docstring and setup code elided]
        """
        ...
            # Make the request on the httplib connection object.
            httplib_response = self._make_request(conn, method, url,
                                                  timeout=timeout_obj,
                                                  body=body, headers=headers,
>                                                 chunked=chunked)

/gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py:600:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'GET', url = '/metadata/nasa'
timeout =
chunked = False
httplib_request_kw = {'body': None, 'headers': {'Connection': 'keep-alive', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'User-Agent': 'internetarchive/1.7.1 (Linux ; N; en; None) Python/3.5.3'}}
timeout_obj =

    def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
                      **httplib_request_kw):
        """
        Perform a request on a given urllib connection object taken from our
        pool.
        [docstring elided]
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = timeout_obj.connect_timeout

        # Trigger any extra validation we need to do.
        try:
>           self._validate_conn(conn)

/gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py:345:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =

    def _validate_conn(self, conn):
        """
        Called right before a request is made, after the socket is created.
        """
        super(HTTPSConnectionPool, self)._validate_conn(conn)

        # Force connect early to allow us to validate the connection.
        if not getattr(conn, 'sock', None):  # AppEngine might not have  `.sock`
>           conn.connect()

/gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py:844:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def connect(self):
        # Add certificate verification
>       conn = self._new_conn()

/gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/packages/urllib3/connection.py:284:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def _new_conn(self):
        """ Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        extra_kw = {}
        if self.source_address:
            extra_kw['source_address'] = self.source_address

        if self.socket_options:
            extra_kw['socket_options'] = self.socket_options

        try:
            conn = connection.create_connection(
                (self.host, self.port), self.timeout, **extra_kw)

        except SocketTimeout as e:
            raise ConnectTimeoutError(
                self, "Connection to %s timed out. (connect timeout=%s)" %
                (self.host, self.timeout))

        except SocketError as e:
            raise NewConnectionError(
>               self, "Failed to establish a new connection: %s" % e)
E           requests.packages.urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno -2] Name or service not known

/gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError

During handling of the above exception, another exception occurred:

self =
request = , stream = False
timeout =
verify = True, cert = None, proxies = OrderedDict()

    def send(self, request, stream=False, timeout=None, verify=True, cert=None,
             proxies=None):
        """Sends PreparedRequest object. Returns Response object.
        [docstring elided]
        """
        conn = self.get_connection(request.url, proxies)

        self.cert_verify(conn, request.url, verify, cert)
        url = self.request_url(request, proxies)
        self.add_headers(request)

        chunked = not (request.body is None or 'Content-Length' in request.headers)

        if isinstance(timeout, tuple):
            try:
                connect, read = timeout
                timeout = TimeoutSauce(connect=connect, read=read)
            except ValueError as e:
                # this may raise a string formatting error.
                err = ("Invalid timeout {0}. Pass a (connect, read) "
                       "timeout tuple, or a single float to set "
                       "both timeouts to the same value".format(timeout))
                raise ValueError(err)
        else:
            timeout = TimeoutSauce(connect=timeout, read=timeout)
        try:
            if not chunked:
                resp = conn.urlopen(
                    method=request.method,
                    url=url,
                    body=request.body,
                    headers=request.headers,
                    redirect=False,
                    assert_same_host=False,
                    preload_content=False,
                    decode_content=False,
                    retries=self.max_retries,
>                   timeout=timeout
                )

/gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/adapters.py:423:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

[urlopen() then recurses once per retry.  The log repeats the full
urlopen() source for the frames with retries = Retry(total=2, ...),
Retry(total=1, ...) and Retry(total=0, ...), each with
err = NewConnectionError('Failed to establish a new connection:
[Errno -2] Name or service not known',) and each ending in

            log.warning("Retrying (%r) after connection "
                        "broken by '%r': %s", retries, err, url)
            return self.urlopen(method, url, body, headers, retries,
                                redirect, assert_same_host,
                                timeout=timeout, pool_timeout=pool_timeout,
                                release_conn=release_conn, body_pos=body_pos,
>                               **response_kw)

/gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py:678:

These identical frames are elided here.]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'GET', url = '/metadata/nasa', body = None
headers = {'Connection': 'keep-alive', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'User-Agent': 'internetarchive/1.7.1 (Linux ; N; en; None) Python/3.5.3'}
retries = Retry(total=0, connect=0, read=3, redirect=0), redirect = False
assert_same_host = False
timeout =
pool_timeout = None, release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None
release_this_conn = True, err = None, clean_exit = False
timeout_obj =
is_new_proxy_conn = False

    def urlopen(self, method, url, body=None, headers=None, retries=None,
                redirect=True, assert_same_host=True, timeout=_Default,
                pool_timeout=None, release_conn=None, chunked=False,
                body_pos=None, **response_kw):
        [source as above]
        ...
            retries = retries.increment(method, url, error=e, _pool=self,
>                                       _stacktrace=sys.exc_info()[2])

/gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py:649:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=0, read=3, redirect=0), method = 'GET'
url = '/metadata/nasa', response = None
error = NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known',)
_pool =
_stacktrace =

    def increment(self, method=None, url=None, response=None, error=None,
                  _pool=None, _stacktrace=None):
        """ Return a new Retry object with incremented retry counters.
        [docstring elided]
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise six.reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        cause = 'unknown'
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise six.reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or not self._is_method_retryable(method):
                raise six.reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = 'too many redirects'
            redirect_location = response.get_redirect_location()
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and a the given method is in the whitelist
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                cause = ResponseError.SPECIFIC_ERROR.format(
                    status_code=response.status)
                status = response.status

        history = self.history + (RequestHistory(method, url, error, status, redirect_location),)

        new_retry = self.new(
            total=total,
            connect=connect, read=read, redirect=redirect,
            history=history)

        if new_retry.is_exhausted():
>           raise MaxRetryError(_pool, url, error or ResponseError(cause))
E           requests.packages.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='archive.org', port=443): Max retries exceeded with url: /metadata/nasa (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known',))

/gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError

During handling of the above exception, another exception occurred:

self =
identifier = 'nasa', request_kwargs = {'timeout': 1e-13}

    def get_metadata(self, identifier, request_kwargs=None):
        """Get an item's metadata from the `Metadata API`__

        :type identifier: str
        :param identifier: Globally unique Archive.org identifier.

        :rtype: dict
        :returns: Metadata API response.
        """
        request_kwargs = {} if not request_kwargs else request_kwargs
        url = '{0}//archive.org/metadata/{1}'.format(self.protocol, identifier)
        if 'timeout' not in request_kwargs:
            request_kwargs['timeout'] = 12
        try:
>           resp = self.get(url, **request_kwargs)

../../../internetarchive-1.7.1/internetarchive/session.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
url = 'https://archive.org/metadata/nasa'
kwargs = {'allow_redirects': True, 'timeout': 1e-13}

    def get(self, url, **kwargs):
        """Sends a GET request. Returns :class:`Response` object.
        [docstring elided]
        """
        kwargs.setdefault('allow_redirects', True)
>       return self.request('GET', url, **kwargs)

/gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/sessions.py:501:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'GET', url = 'https://archive.org/metadata/nasa', params = None
data = None, headers = None, cookies = None, files = None, auth = None
timeout = 1e-13, allow_redirects = True, proxies = {}, hooks = None
stream = None, verify = None, cert = None, json = None

    def request(self, method, url, params=None, data=None, headers=None,
                cookies=None, files=None, auth=None, timeout=None,
                allow_redirects=True, proxies=None, hooks=None, stream=None,
                verify=None, cert=None, json=None):
        """Constructs a :class:`Request`, prepares it and sends it.
        Returns :class:`Response` object.
        [docstring elided]
        """
        # Create the Request.
        req = Request(
            method = method.upper(),
            url = url,
            headers = headers,
            files = files,
            data = data or {},
            json = json,
            params = params or {},
            auth = auth,
            cookies = cookies,
            hooks = hooks,
        )
        prep = self.prepare_request(req)

        proxies = proxies or {}

        settings = self.merge_environment_settings(
            prep.url, proxies, stream, verify, cert
        )

        # Send the request.
        send_kwargs = {
            'timeout': timeout,
            'allow_redirects': allow_redirects,
        }
        send_kwargs.update(settings)
>       resp = self.send(prep, **send_kwargs)

/gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/sessions.py:488:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
request =
kwargs = {'allow_redirects': True, 'cert': None, 'proxies': OrderedDict(), 'stream': False, ...}
insecure = False, w = []

    def send(self, request, **kwargs):
        # Catch urllib3 warnings for HTTPS related errors.
        insecure = False
        with warnings.catch_warnings(record=True) as w:
            warnings.filterwarnings('always')
>           r = super(ArchiveSession, self).send(request, **kwargs)

../../../internetarchive-1.7.1/internetarchive/session.py:353:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
request =
kwargs = {'cert': None, 'proxies': OrderedDict(), 'stream': False, 'timeout': 1e-13, ...}
allow_redirects = True, stream = False, hooks = {'response': []}
checked_urls = set()
adapter =
start = datetime.datetime(2017, 8, 17, 20, 18, 13, 579671)

    def send(self, request, **kwargs):
        """
        Send a given PreparedRequest.

        :rtype: requests.Response
        """
        # Set defaults that the hooks can utilize to ensure they always have
        # the correct parameters to reproduce the previous request.
        kwargs.setdefault('stream', self.stream)
        kwargs.setdefault('verify', self.verify)
        kwargs.setdefault('cert', self.cert)
        kwargs.setdefault('proxies', self.proxies)

        # It's possible that users might accidentally send a Request object.
        # Guard against that specific failure case.
        if isinstance(request, Request):
            raise ValueError('You can only send PreparedRequests.')

        # Set up variables needed for resolve_redirects and dispatching of hooks
        allow_redirects = kwargs.pop('allow_redirects', True)
        stream = kwargs.get('stream')
        hooks = request.hooks

        # Resolve URL in redirect cache, if available.
        if allow_redirects:
            checked_urls = set()
            while request.url in self.redirect_cache:
                checked_urls.add(request.url)
                new_url = self.redirect_cache.get(request.url)
                if new_url in checked_urls:
                    break
                request.url = new_url

        # Get the appropriate adapter to use
        adapter = self.get_adapter(url=request.url)

        # Start time (approximately) of the request
        start = datetime.utcnow()

        # Send the request
>       r = adapter.send(request, **kwargs)

/gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/sessions.py:609:

[HTTPAdapter.send() frame repeats as above; remainder of the log
truncated.]
--8<---------------cut here---------------end--------------->8---
low_conn.close() raise except (ProtocolError, socket.error) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) > raise ConnectionError(e, request=request) E requests.exceptions.ConnectionError: HTTPSConnectionPool(host='archive.org', port=443): Max retries exceeded with url: /metadata/nasa (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known',)) /gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/adapters.py:487: ConnectionError During handling of the above exception, another exception occurred: def test_get_item_with_kwargs(): with IaRequestsMock(assert_all_requests_are_fired=False) as rsps: rsps.add_metadata_mock('nasa') item = get_item('nasa', http_adapter_kwargs={'max_retries': 13}) assert isinstance(item.session.adapters['{0}//'.format(PROTOCOL)].max_retries, urllib3.Retry) try: > get_item('nasa', request_kwargs={'timeout': .0000000000001}) ../../../internetarchive-1.7.1/tests/test_api.py:74: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ identifier = 'nasa', config = None, config_file = None archive_session = debug = None, http_adapter_kwargs = None, request_kwargs = {'timeout': 1e-13} def get_item(identifier, config=None, config_file=None, archive_session=None, debug=None, http_adapter_kwargs=None, request_kwargs=None): """Get an :class:`Item` object. :type identifier: str :param identifier: The globally unique Archive.org item identifier. :type config: dict :param config: (optional) A dictionary used to configure your session. :type config_file: str :param config_file: (optional) A path to a config file used to configure your session. :type archive_session: :class:`ArchiveSession` :param archive_session: (optional) An :class:`ArchiveSession` object can be provided via the ``archive_session`` parameter. :type http_adapter_kwargs: dict :param http_adapter_kwargs: (optional) Keyword arguments that :py:class:`requests.adapters.HTTPAdapter` takes. :type request_kwargs: dict :param request_kwargs: (optional) Keyword arguments that :py:class:`requests.Request` takes. Usage: >>> from internetarchive import get_item >>> item = get_item('nasa') >>> item.item_size 121084 """ if not archive_session: archive_session = get_session(config, config_file, debug, http_adapter_kwargs) > return archive_session.get_item(identifier, request_kwargs=request_kwargs) ../../../internetarchive-1.7.1/internetarchive/api.py:116: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = identifier = 'nasa', item_metadata = None, request_kwargs = {'timeout': 1e-13} def get_item(self, identifier, item_metadata=None, request_kwargs=None): """A method for creating :class:`internetarchive.Item ` and :class:`internetarchive.Collection ` objects. :type identifier: str :param identifier: A globally unique Archive.org identifier. :type item_metadata: dict :param item_metadata: (optional) A metadata dict used to initialize the Item or Collection object. Metadata will automatically be retrieved from Archive.org if nothing is provided. 
:type request_kwargs: dict :param request_kwargs: (optional) Keyword arguments to be used in :meth:`requests.sessions.Session.get` request. """ request_kwargs = {} if not request_kwargs else request_kwargs if not item_metadata: logger.debug('no metadata provided for "{0}", ' 'retrieving now.'.format(identifier)) > item_metadata = self.get_metadata(identifier, request_kwargs) ../../../internetarchive-1.7.1/internetarchive/session.py:214: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = identifier = 'nasa', request_kwargs = {'timeout': 1e-13} def get_metadata(self, identifier, request_kwargs=None): """Get an item's metadata from the `Metadata API `__ :type identifier: str :param identifier: Globally unique Archive.org identifier. :rtype: dict :returns: Metadat API response. """ request_kwargs = {} if not request_kwargs else request_kwargs url = '{0}//archive.org/metadata/{1}'.format(self.protocol, identifier) if 'timeout' not in request_kwargs: request_kwargs['timeout'] = 12 try: resp = self.get(url, **request_kwargs) resp.raise_for_status() except Exception as exc: error_msg = 'Error retrieving metadata from {0}, {1}'.format(url, exc) logger.error(error_msg) > raise type(exc)(error_msg) E requests.exceptions.ConnectionError: Error retrieving metadata from https://archive.org/metadata/nasa, HTTPSConnectionPool(host='archive.org', port=443): Max retries exceeded with url: /metadata/nasa (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known',)) ../../../internetarchive-1.7.1/internetarchive/session.py:242: ConnectionError During handling of the above exception, another exception occurred: def test_get_item_with_kwargs(): with IaRequestsMock(assert_all_requests_are_fired=False) as rsps: rsps.add_metadata_mock('nasa') item = get_item('nasa', http_adapter_kwargs={'max_retries': 13}) assert isinstance(item.session.adapters['{0}//'.format(PROTOCOL)].max_retries, urllib3.Retry) try: get_item('nasa', request_kwargs={'timeout': .0000000000001}) except Exception as exc: > assert 'timed out' in str(exc) E assert 'timed out' in "Error retrieving metadata from https://archive.org/metadata/nasa, HTTPSConnectionPool(host='archive.org', port=443): ...PSConnection object at 0x7f9238ef8dd8>: Failed to establish a new connection: [Errno -2] Name or service not known',))" E + where "Error retrieving metadata from https://archive.org/metadata/nasa, HTTPSConnectionPool(host='archive.org', port=443): ...PSConnection object at 0x7f9238ef8dd8>: Failed to establish a new connection: [Errno -2] Name or service not known',))" = str(ConnectionError("Error retrieving metadata from https://archive.org/metadata/nasa, HTTPSConnectionPool(host='archive.o...Connection object at 0x7f9238ef8dd8>: Failed to establish a new connection: [Errno -2] Name or service not known',))",)) ../../../internetarchive-1.7.1/tests/test_api.py:76: AssertionError --------------------------------- Captured log --------------------------------- session.py 213 DEBUG no metadata provided for "nasa", retrieving now. session.py 213 DEBUG no metadata provided for "nasa", retrieving now. 
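--8<---------------cut here---------------end--------------->8---

Note that this test does not even fail on the message it expects: inside the
build container DNS resolution itself is unavailable, so the exception says
"Name or service not known" where the test looks for "timed out".  Since every
failure in this log comes from an attempted connection to archive.org, one
option would be to make the check phase deselect the offending tests by name
with pytest's '-k' option.  An untested sketch; the '-k' expression below is
assembled from the test names visible in this log and would still need to be
extended to cover all eleven failures:

--8<---------------cut here---------------start------------->8---
(arguments
 '(#:phases
   (modify-phases %standard-phases
     (replace 'check
       (lambda _
         ;; Skip the tests that try to talk to archive.org; there is
         ;; no network access inside the build container.
         (zero? (system* "py.test" "-k"
                         (string-append "not test_get_item_with_kwargs "
                                        "and not test_ia "
                                        "and not test_ia_download"))))))))
--8<---------------cut here---------------end--------------->8---

The rest of the log, for reference:

--8<---------------cut here---------------start------------->8---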
___________________________________ test_ia ____________________________________

self = 

    def _new_conn(self):
        """ Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        extra_kw = {}
        if self.source_address:
            extra_kw['source_address'] = self.source_address

        if self.socket_options:
            extra_kw['socket_options'] = self.socket_options

        try:
            conn = connection.create_connection(
>               (self.host, self.port), self.timeout, **extra_kw)

/gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/packages/urllib3/connection.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('archive.org', 80), timeout = 12, source_address = None
socket_options = [(6, 1, 1)]

    def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
                          source_address=None, socket_options=None):
        """Connect to *address* and return the socket object. [...]
        """
        host, port = address
        if host.startswith('['):
            host = host.strip('[]')
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

>       for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

/gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'archive.org', port = 80, family = 
type = , proto = 0, flags = 0

    def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
        """Resolve host and port into list of address info entries. [...]
        """
        # We override this function since we want to translate the numeric family
        # and socket type values to enum constants.
        addrlist = []
>       for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E       socket.gaierror: [Errno -2] Name or service not known

/gnu/store/3aw9x28la9nh8fzkm665d7fywxzbl15j-python-3.5.3/lib/python3.5/socket.py:733: gaierror

During handling of the above exception, another exception occurred:

self = 
method = 'GET', url = '/metadata/nasa', body = None
headers = {'Connection': 'keep-alive', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'User-Agent': 'internetarchive/1.7.1 (Linux ; N; en; foo) Python/3.5.3'}
retries = Retry(total=0, connect=0, read=3, redirect=0), redirect = False
assert_same_host = False
timeout = 
pool_timeout = None, release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None
release_this_conn = True, err = None, clean_exit = False
timeout_obj = 
is_new_proxy_conn = False

    def urlopen(self, method, url, body=None, headers=None, retries=None,
                redirect=True, assert_same_host=True, timeout=_Default,
                pool_timeout=None, release_conn=None, chunked=False,
                body_pos=None, **response_kw):
        """ Get a connection from the pool and perform an HTTP request. [...]
        """
        [...]
        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout
            [...]
            # Make the request on the httplib connection object.
            httplib_response = self._make_request(conn, method, url,
                                                  timeout=timeout_obj,
                                                  body=body, headers=headers,
>                                                 chunked=chunked)

/gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py:600:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
                      **httplib_request_kw):
        [...]
>       conn.request(method, url, **httplib_request_kw)

/gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py:356:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def request(self, method, url, body=None, headers={}):
        """Send a complete request to the server."""
>       self._send_request(method, url, body, headers)

/gnu/store/3aw9x28la9nh8fzkm665d7fywxzbl15j-python-3.5.3/lib/python3.5/http/client.py:1107:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def _send_request(self, method, url, body, headers):
        [...]
>       self.endheaders(body)

/gnu/store/3aw9x28la9nh8fzkm665d7fywxzbl15j-python-3.5.3/lib/python3.5/http/client.py:1152:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def endheaders(self, message_body=None):
        [...]
>       self._send_output(message_body)

/gnu/store/3aw9x28la9nh8fzkm665d7fywxzbl15j-python-3.5.3/lib/python3.5/http/client.py:1103:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def _send_output(self, message_body=None):
        [...]
>       self.send(msg)

/gnu/store/3aw9x28la9nh8fzkm665d7fywxzbl15j-python-3.5.3/lib/python3.5/http/client.py:934:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def send(self, data):
        [...]
        if self.sock is None:
            if self.auto_open:
>               self.connect()

/gnu/store/3aw9x28la9nh8fzkm665d7fywxzbl15j-python-3.5.3/lib/python3.5/http/client.py:877:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def connect(self):
>       conn = self._new_conn()

/gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/packages/urllib3/connection.py:166:
(connect timeout=%s)" % (self.host, self.timeout)) except SocketError as e: raise NewConnectionError( > self, "Failed to establish a new connection: %s" % e) E requests.packages.urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno -2] Name or service not known /gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError During handling of the above exception, another exception occurred: self = request = , stream = False timeout = verify = True, cert = None, proxies = OrderedDict() def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple :param verify: (optional) Whether to verify SSL certificates. :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ conn = self.get_connection(request.url, proxies) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers(request) chunked = not (request.body is None or 'Content-Length' in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError as e: # this may raise a string formatting error. err = ("Invalid timeout {0}. Pass a (connect, read) " "timeout tuple, or a single float to set " "both timeouts to the same value".format(timeout)) raise ValueError(err) else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: if not chunked: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, > timeout=timeout ) /gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/adapters.py:423: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = method = 'GET', url = '/metadata/nasa', body = None headers = {'Connection': 'keep-alive', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'User-Agent': 'internetarchive/1.7.1 (Linux ; N; en; foo) Python/3.5.3'} retries = Retry(total=2, connect=2, read=3, redirect=0), redirect = False assert_same_host = False timeout = pool_timeout = None, release_conn = False, chunked = False, body_pos = None response_kw = {'decode_content': False, 'preload_content': False}, conn = None release_this_conn = True err = NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known',) clean_exit = False timeout_obj = is_new_proxy_conn = False def urlopen(self, method, url, body=None, headers=None, retries=None, redirect=True, assert_same_host=True, timeout=_Default, pool_timeout=None, release_conn=None, chunked=False, body_pos=None, **response_kw): """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. 
note:: More commonly, it's appropriate to use a convenience method provided by :class:`.RequestMethods`, such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param body: Data to send in the request body (useful for creating POST requests, see HTTPConnectionPool.post_url for more convenience). :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. Pass ``None`` to retry until you receive a response. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When False, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``response_kw.get('preload_content', True)``. :param chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. 
:param \\**response_kw: Additional parameters are passed to :meth:`urllib3.response.HTTPResponse.from_httplib` """ if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = response_kw.get('preload_content', True) # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn # Merge the proxy headers. Only do this in HTTP. We have to copy the # headers dict so we can safely change it without those changes being # reflected in anyone else's copy. if self.scheme == 'http': headers = headers.copy() headers.update(self.proxy_headers) # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) if is_new_proxy_conn: self._prepare_proxy(conn) # Make the request on the httplib connection object. httplib_response = self._make_request(conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked) # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Pass method to Response for length checking response_kw['request_method'] = method # Import httplib's response into our own wrapper object response = self.ResponseCls.from_httplib(httplib_response, pool=self, connection=response_conn, retries=retries, **response_kw) # Everything went great! clean_exit = True except queue.Empty: # Timed out by queue. raise EmptyPoolError(self, "No pool connections are available.") except (BaseSSLError, CertificateError) as e: # Close the connection. If a connection is reused on which there # was a Certificate error, the next request will certainly raise # another Certificate error. clean_exit = False raise SSLError(e) except SSLError: # Treat SSLError separately from BaseSSLError to preserve # traceback. clean_exit = False raise except (TimeoutError, HTTPException, SocketError, ProtocolError) as e: # Discard the connection for these exceptions. It will be # be replaced during the next _get_conn() call. 
clean_exit = False if isinstance(e, (SocketError, NewConnectionError)) and self.proxy: e = ProxyError('Cannot connect to proxy.', e) elif isinstance(e, (SocketError, HTTPException)): e = ProtocolError('Connection aborted.', e) retries = retries.increment(method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]) retries.sleep() # Keep track of the error for the retry warning. err = e finally: if not clean_exit: # We hit some kind of exception, handled or otherwise. We need # to throw the connection away unless explicitly told not to. # Close the connection, set the variable to None, and make sure # we put the None back in the pool to avoid leaking it. conn = conn and conn.close() release_this_conn = True if release_this_conn: # Put the connection back to be reused. If the connection is # expired then it will be None, which will get replaced with a # fresh connection during _get_conn. self._put_conn(conn) if not conn: # Try again log.warning("Retrying (%r) after connection " "broken by '%r': %s", retries, err, url) return self.urlopen(method, url, body, headers, retries, redirect, assert_same_host, timeout=timeout, pool_timeout=pool_timeout, release_conn=release_conn, body_pos=body_pos, > **response_kw) /gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py:678: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = method = 'GET', url = '/metadata/nasa', body = None headers = {'Connection': 'keep-alive', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'User-Agent': 'internetarchive/1.7.1 (Linux ; N; en; foo) Python/3.5.3'} retries = Retry(total=1, connect=1, read=3, redirect=0), redirect = False assert_same_host = False timeout = pool_timeout = None, release_conn = False, chunked = False, body_pos = None response_kw = {'decode_content': False, 'preload_content': False}, conn = None release_this_conn = True err = NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known',) clean_exit = False timeout_obj = is_new_proxy_conn = False def urlopen(self, method, url, body=None, headers=None, retries=None, redirect=True, assert_same_host=True, timeout=_Default, pool_timeout=None, release_conn=None, chunked=False, body_pos=None, **response_kw): """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method provided by :class:`.RequestMethods`, such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param body: Data to send in the request body (useful for creating POST requests, see HTTPConnectionPool.post_url for more convenience). :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. Pass ``None`` to retry until you receive a response. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. 
Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When False, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``response_kw.get('preload_content', True)``. :param chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. :param \\**response_kw: Additional parameters are passed to :meth:`urllib3.response.HTTPResponse.from_httplib` """ if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = response_kw.get('preload_content', True) # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn # Merge the proxy headers. Only do this in HTTP. We have to copy the # headers dict so we can safely change it without those changes being # reflected in anyone else's copy. if self.scheme == 'http': headers = headers.copy() headers.update(self.proxy_headers) # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. 
body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) if is_new_proxy_conn: self._prepare_proxy(conn) # Make the request on the httplib connection object. httplib_response = self._make_request(conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked) # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Pass method to Response for length checking response_kw['request_method'] = method # Import httplib's response into our own wrapper object response = self.ResponseCls.from_httplib(httplib_response, pool=self, connection=response_conn, retries=retries, **response_kw) # Everything went great! clean_exit = True except queue.Empty: # Timed out by queue. raise EmptyPoolError(self, "No pool connections are available.") except (BaseSSLError, CertificateError) as e: # Close the connection. If a connection is reused on which there # was a Certificate error, the next request will certainly raise # another Certificate error. clean_exit = False raise SSLError(e) except SSLError: # Treat SSLError separately from BaseSSLError to preserve # traceback. clean_exit = False raise except (TimeoutError, HTTPException, SocketError, ProtocolError) as e: # Discard the connection for these exceptions. It will be # be replaced during the next _get_conn() call. clean_exit = False if isinstance(e, (SocketError, NewConnectionError)) and self.proxy: e = ProxyError('Cannot connect to proxy.', e) elif isinstance(e, (SocketError, HTTPException)): e = ProtocolError('Connection aborted.', e) retries = retries.increment(method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]) retries.sleep() # Keep track of the error for the retry warning. err = e finally: if not clean_exit: # We hit some kind of exception, handled or otherwise. We need # to throw the connection away unless explicitly told not to. # Close the connection, set the variable to None, and make sure # we put the None back in the pool to avoid leaking it. conn = conn and conn.close() release_this_conn = True if release_this_conn: # Put the connection back to be reused. If the connection is # expired then it will be None, which will get replaced with a # fresh connection during _get_conn. 
self._put_conn(conn) if not conn: # Try again log.warning("Retrying (%r) after connection " "broken by '%r': %s", retries, err, url) return self.urlopen(method, url, body, headers, retries, redirect, assert_same_host, timeout=timeout, pool_timeout=pool_timeout, release_conn=release_conn, body_pos=body_pos, > **response_kw) /gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py:678: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = method = 'GET', url = '/metadata/nasa', body = None headers = {'Connection': 'keep-alive', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'User-Agent': 'internetarchive/1.7.1 (Linux ; N; en; foo) Python/3.5.3'} retries = Retry(total=0, connect=0, read=3, redirect=0), redirect = False assert_same_host = False timeout = pool_timeout = None, release_conn = False, chunked = False, body_pos = None response_kw = {'decode_content': False, 'preload_content': False}, conn = None release_this_conn = True err = NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known',) clean_exit = False timeout_obj = is_new_proxy_conn = False def urlopen(self, method, url, body=None, headers=None, retries=None, redirect=True, assert_same_host=True, timeout=_Default, pool_timeout=None, release_conn=None, chunked=False, body_pos=None, **response_kw): """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method provided by :class:`.RequestMethods`, such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param body: Data to send in the request body (useful for creating POST requests, see HTTPConnectionPool.post_url for more convenience). :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. Pass ``None`` to retry until you receive a response. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When False, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. 
:param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``response_kw.get('preload_content', True)``. :param chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. :param \\**response_kw: Additional parameters are passed to :meth:`urllib3.response.HTTPResponse.from_httplib` """ if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = response_kw.get('preload_content', True) # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn # Merge the proxy headers. Only do this in HTTP. We have to copy the # headers dict so we can safely change it without those changes being # reflected in anyone else's copy. if self.scheme == 'http': headers = headers.copy() headers.update(self.proxy_headers) # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) if is_new_proxy_conn: self._prepare_proxy(conn) # Make the request on the httplib connection object. httplib_response = self._make_request(conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked) # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. 
response_conn = conn if not release_conn else None # Pass method to Response for length checking response_kw['request_method'] = method # Import httplib's response into our own wrapper object response = self.ResponseCls.from_httplib(httplib_response, pool=self, connection=response_conn, retries=retries, **response_kw) # Everything went great! clean_exit = True except queue.Empty: # Timed out by queue. raise EmptyPoolError(self, "No pool connections are available.") except (BaseSSLError, CertificateError) as e: # Close the connection. If a connection is reused on which there # was a Certificate error, the next request will certainly raise # another Certificate error. clean_exit = False raise SSLError(e) except SSLError: # Treat SSLError separately from BaseSSLError to preserve # traceback. clean_exit = False raise except (TimeoutError, HTTPException, SocketError, ProtocolError) as e: # Discard the connection for these exceptions. It will be # be replaced during the next _get_conn() call. clean_exit = False if isinstance(e, (SocketError, NewConnectionError)) and self.proxy: e = ProxyError('Cannot connect to proxy.', e) elif isinstance(e, (SocketError, HTTPException)): e = ProtocolError('Connection aborted.', e) retries = retries.increment(method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]) retries.sleep() # Keep track of the error for the retry warning. err = e finally: if not clean_exit: # We hit some kind of exception, handled or otherwise. We need # to throw the connection away unless explicitly told not to. # Close the connection, set the variable to None, and make sure # we put the None back in the pool to avoid leaking it. conn = conn and conn.close() release_this_conn = True if release_this_conn: # Put the connection back to be reused. If the connection is # expired then it will be None, which will get replaced with a # fresh connection during _get_conn. self._put_conn(conn) if not conn: # Try again log.warning("Retrying (%r) after connection " "broken by '%r': %s", retries, err, url) return self.urlopen(method, url, body, headers, retries, redirect, assert_same_host, timeout=timeout, pool_timeout=pool_timeout, release_conn=release_conn, body_pos=body_pos, > **response_kw) /gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py:678: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = method = 'GET', url = '/metadata/nasa', body = None headers = {'Connection': 'keep-alive', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'User-Agent': 'internetarchive/1.7.1 (Linux ; N; en; foo) Python/3.5.3'} retries = Retry(total=0, connect=0, read=3, redirect=0), redirect = False assert_same_host = False timeout = pool_timeout = None, release_conn = False, chunked = False, body_pos = None response_kw = {'decode_content': False, 'preload_content': False}, conn = None release_this_conn = True, err = None, clean_exit = False timeout_obj = is_new_proxy_conn = False def urlopen(self, method, url, body=None, headers=None, retries=None, redirect=True, assert_same_host=True, timeout=_Default, pool_timeout=None, release_conn=None, chunked=False, body_pos=None, **response_kw): """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method provided by :class:`.RequestMethods`, such as :meth:`request`. .. 
note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param body: Data to send in the request body (useful for creating POST requests, see HTTPConnectionPool.post_url for more convenience). :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. Pass ``None`` to retry until you receive a response. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When False, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``response_kw.get('preload_content', True)``. :param chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. :param \\**response_kw: Additional parameters are passed to :meth:`urllib3.response.HTTPResponse.from_httplib` """ if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = response_kw.get('preload_content', True) # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. Update this variable if necessary, and # leave `release_conn` constant throughout the function. 
That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn # Merge the proxy headers. Only do this in HTTP. We have to copy the # headers dict so we can safely change it without those changes being # reflected in anyone else's copy. if self.scheme == 'http': headers = headers.copy() headers.update(self.proxy_headers) # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) if is_new_proxy_conn: self._prepare_proxy(conn) # Make the request on the httplib connection object. httplib_response = self._make_request(conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked) # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Pass method to Response for length checking response_kw['request_method'] = method # Import httplib's response into our own wrapper object response = self.ResponseCls.from_httplib(httplib_response, pool=self, connection=response_conn, retries=retries, **response_kw) # Everything went great! clean_exit = True except queue.Empty: # Timed out by queue. raise EmptyPoolError(self, "No pool connections are available.") except (BaseSSLError, CertificateError) as e: # Close the connection. If a connection is reused on which there # was a Certificate error, the next request will certainly raise # another Certificate error. clean_exit = False raise SSLError(e) except SSLError: # Treat SSLError separately from BaseSSLError to preserve # traceback. clean_exit = False raise except (TimeoutError, HTTPException, SocketError, ProtocolError) as e: # Discard the connection for these exceptions. It will be # be replaced during the next _get_conn() call. clean_exit = False if isinstance(e, (SocketError, NewConnectionError)) and self.proxy: e = ProxyError('Cannot connect to proxy.', e) elif isinstance(e, (SocketError, HTTPException)): e = ProtocolError('Connection aborted.', e) retries = retries.increment(method, url, error=e, _pool=self, > _stacktrace=sys.exc_info()[2]) /gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py:649: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=0, read=3, redirect=0), method = 'GET' url = '/metadata/nasa', response = None error = NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known',) _pool = _stacktrace = def increment(self, method=None, url=None, response=None, error=None, _pool=None, _stacktrace=None): """ Return a new Retry object with incremented retry counters. 
:param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.HTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise six.reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect cause = 'unknown' status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? if connect is False: raise six.reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or not self._is_method_retryable(method): raise six.reraise(type(error), error, _stacktrace) elif read is not None: read -= 1 elif response and response.get_redirect_location(): # Redirect retry? if redirect is not None: redirect -= 1 cause = 'too many redirects' redirect_location = response.get_redirect_location() status = response.status else: # Incrementing because of a server error like a 500 in # status_forcelist and a the given method is in the whitelist cause = ResponseError.GENERIC_ERROR if response and response.status: cause = ResponseError.SPECIFIC_ERROR.format( status_code=response.status) status = response.status history = self.history + (RequestHistory(method, url, error, status, redirect_location),) new_retry = self.new( total=total, connect=connect, read=read, redirect=redirect, history=history) if new_retry.is_exhausted(): > raise MaxRetryError(_pool, url, error or ResponseError(cause)) E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='archive.org', port=80): Max retries exceeded with url: /metadata/nasa (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known',)) /gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError During handling of the above exception, another exception occurred: self = identifier = 'nasa', request_kwargs = {'timeout': 12} def get_metadata(self, identifier, request_kwargs=None): """Get an item's metadata from the `Metadata API `__ :type identifier: str :param identifier: Globally unique Archive.org identifier. :rtype: dict :returns: Metadat API response. """ request_kwargs = {} if not request_kwargs else request_kwargs url = '{0}//archive.org/metadata/{1}'.format(self.protocol, identifier) if 'timeout' not in request_kwargs: request_kwargs['timeout'] = 12 try: > resp = self.get(url, **request_kwargs) ../../../internetarchive-1.7.1/internetarchive/session.py:237: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = url = 'http://archive.org/metadata/nasa' kwargs = {'allow_redirects': True, 'timeout': 12} def get(self, url, **kwargs): """Sends a GET request. Returns :class:`Response` object. :param url: URL for the new :class:`Request` object. :param \*\*kwargs: Optional arguments that ``request`` takes. 
:rtype: requests.Response """ kwargs.setdefault('allow_redirects', True) > return self.request('GET', url, **kwargs) /gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/sessions.py:501: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = method = 'GET', url = 'http://archive.org/metadata/nasa', params = None data = None, headers = None, cookies = None, files = None, auth = None timeout = 12, allow_redirects = True, proxies = {}, hooks = None, stream = None verify = None, cert = None, json = None def request(self, method, url, params=None, data=None, headers=None, cookies=None, files=None, auth=None, timeout=None, allow_redirects=True, proxies=None, hooks=None, stream=None, verify=None, cert=None, json=None): """Constructs a :class:`Request `, prepares it and sends it. Returns :class:`Response ` object. :param method: method for the new :class:`Request` object. :param url: URL for the new :class:`Request` object. :param params: (optional) Dictionary or bytes to be sent in the query string for the :class:`Request`. :param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`. :param json: (optional) json to send in the body of the :class:`Request`. :param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`. :param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`. :param files: (optional) Dictionary of ``'filename': file-like-objects`` for multipart encoding upload. :param auth: (optional) Auth tuple or callable to enable Basic/Digest/Custom HTTP Auth. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple :param allow_redirects: (optional) Set to True by default. :type allow_redirects: bool :param proxies: (optional) Dictionary mapping protocol or protocol and hostname to the URL of the proxy. :param stream: (optional) whether to immediately download the response content. Defaults to ``False``. :param verify: (optional) whether the SSL cert will be verified. A CA_BUNDLE path can also be provided. Defaults to ``True``. :param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair. :rtype: requests.Response """ # Create the Request. req = Request( method = method.upper(), url = url, headers = headers, files = files, data = data or {}, json = json, params = params or {}, auth = auth, cookies = cookies, hooks = hooks, ) prep = self.prepare_request(req) proxies = proxies or {} settings = self.merge_environment_settings( prep.url, proxies, stream, verify, cert ) # Send the request. send_kwargs = { 'timeout': timeout, 'allow_redirects': allow_redirects, } send_kwargs.update(settings) > resp = self.send(prep, **send_kwargs) /gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/sessions.py:488: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = kwargs = {'allow_redirects': True, 'cert': None, 'proxies': OrderedDict(), 'stream': False, ...} insecure = False, w = [] def send(self, request, **kwargs): # Catch urllib3 warnings for HTTPS related errors. 
insecure = False with warnings.catch_warnings(record=True) as w: warnings.filterwarnings('always') > r = super(ArchiveSession, self).send(request, **kwargs) ../../../internetarchive-1.7.1/internetarchive/session.py:353: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = kwargs = {'cert': None, 'proxies': OrderedDict(), 'stream': False, 'timeout': 12, ...} allow_redirects = True, stream = False, hooks = {'response': []} checked_urls = set() adapter = start = datetime.datetime(2017, 8, 17, 20, 18, 20, 154309) def send(self, request, **kwargs): """ Send a given PreparedRequest. :rtype: requests.Response """ # Set defaults that the hooks can utilize to ensure they always have # the correct parameters to reproduce the previous request. kwargs.setdefault('stream', self.stream) kwargs.setdefault('verify', self.verify) kwargs.setdefault('cert', self.cert) kwargs.setdefault('proxies', self.proxies) # It's possible that users might accidentally send a Request object. # Guard against that specific failure case. if isinstance(request, Request): raise ValueError('You can only send PreparedRequests.') # Set up variables needed for resolve_redirects and dispatching of hooks allow_redirects = kwargs.pop('allow_redirects', True) stream = kwargs.get('stream') hooks = request.hooks # Resolve URL in redirect cache, if available. if allow_redirects: checked_urls = set() while request.url in self.redirect_cache: checked_urls.add(request.url) new_url = self.redirect_cache.get(request.url) if new_url in checked_urls: break request.url = new_url # Get the appropriate adapter to use adapter = self.get_adapter(url=request.url) # Start time (approximately) of the request start = datetime.utcnow() # Send the request > r = adapter.send(request, **kwargs) /gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/sessions.py:609: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = , stream = False timeout = verify = True, cert = None, proxies = OrderedDict() def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple :param verify: (optional) Whether to verify SSL certificates. :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ conn = self.get_connection(request.url, proxies) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers(request) chunked = not (request.body is None or 'Content-Length' in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError as e: # this may raise a string formatting error. err = ("Invalid timeout {0}. 
Pass a (connect, read) " "timeout tuple, or a single float to set " "both timeouts to the same value".format(timeout)) raise ValueError(err) else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: if not chunked: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout ) # Send the request. else: if hasattr(conn, 'proxy_pool'): conn = conn.proxy_pool low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT) try: low_conn.putrequest(request.method, url, skip_accept_encoding=True) for header, value in request.headers.items(): low_conn.putheader(header, value) low_conn.endheaders() for i in request.body: low_conn.send(hex(len(i))[2:].encode('utf-8')) low_conn.send(b'\r\n') low_conn.send(i) low_conn.send(b'\r\n') low_conn.send(b'0\r\n\r\n') # Receive the response from the server try: # For Python 2.7+ versions, use buffering of HTTP # responses r = low_conn.getresponse(buffering=True) except TypeError: # For compatibility with Python 2.6 versions and back r = low_conn.getresponse() resp = HTTPResponse.from_httplib( r, pool=conn, connection=low_conn, preload_content=False, decode_content=False ) except: # If we hit any problems here, clean up the connection. # Then, reraise so that we can handle the actual exception. low_conn.close() raise except (ProtocolError, socket.error) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) > raise ConnectionError(e, request=request) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='archive.org', port=80): Max retries exceeded with url: /metadata/nasa (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known',)) /gnu/store/avxn9b7hva7p7lnbafyzvngsbsf8nwd0-python-requests-2.13.0/lib/python3.5/site-packages/requests/adapters.py:487: ConnectionError During handling of the above exception, another exception occurred: capsys = <_pytest.capture.CaptureFixture object at 0x7f9233fc2358> def test_ia(capsys): ia_call(['ia', '--help']) out, err = capsys.readouterr() assert 'A command line interface to Archive.org.' in out > ia_call(['ia', '--insecure', 'ls', 'nasa']) ../../../internetarchive-1.7.1/tests/cli/test_ia.py:9: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../../../internetarchive-1.7.1/tests/conftest.py:51: in ia_call ia.main() ../../../internetarchive-1.7.1/internetarchive/cli/ia.py:159: in main sys.exit(ia_module.main(argv, session)) ../../../internetarchive-1.7.1/internetarchive/cli/ia_list.py:46: in main item = session.get_item(args['']) ../../../internetarchive-1.7.1/internetarchive/session.py:214: in get_item item_metadata = self.get_metadata(identifier, request_kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = identifier = 'nasa', request_kwargs = {'timeout': 12} def get_metadata(self, identifier, request_kwargs=None): """Get an item's metadata from the `Metadata API `__ :type identifier: str :param identifier: Globally unique Archive.org identifier. 
:rtype: dict :returns: Metadat API response. """ request_kwargs = {} if not request_kwargs else request_kwargs url = '{0}//archive.org/metadata/{1}'.format(self.protocol, identifier) if 'timeout' not in request_kwargs: request_kwargs['timeout'] = 12 try: resp = self.get(url, **request_kwargs) resp.raise_for_status() except Exception as exc: error_msg = 'Error retrieving metadata from {0}, {1}'.format(url, exc) logger.error(error_msg) > raise type(exc)(error_msg) E requests.exceptions.ConnectionError: Error retrieving metadata from http://archive.org/metadata/nasa, HTTPConnectionPool(host='archive.org', port=80): Max retries exceeded with url: /metadata/nasa (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known',)) ../../../internetarchive-1.7.1/internetarchive/session.py:242: ConnectionError --------------------------------- Captured log --------------------------------- session.py 213 DEBUG no metadata provided for "nasa", retrieving now. connectionpool.py 207 DEBUG Starting new HTTP connection (1): archive.org retry.py 378 DEBUG Incremented Retry for (url='/metadata/nasa'): Retry(total=2, connect=2, read=3, redirect=0) connectionpool.py 673 WARNING Retrying (Retry(total=2, connect=2, read=3, redirect=0)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known',)': /metadata/nasa connectionpool.py 207 DEBUG Starting new HTTP connection (2): archive.org retry.py 378 DEBUG Incremented Retry for (url='/metadata/nasa'): Retry(total=1, connect=1, read=3, redirect=0) connectionpool.py 673 WARNING Retrying (Retry(total=1, connect=1, read=3, redirect=0)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known',)': /metadata/nasa connectionpool.py 207 DEBUG Starting new HTTP connection (3): archive.org retry.py 378 DEBUG Incremented Retry for (url='/metadata/nasa'): Retry(total=0, connect=0, read=3, redirect=0) connectionpool.py 673 WARNING Retrying (Retry(total=0, connect=0, read=3, redirect=0)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known',)': /metadata/nasa connectionpool.py 207 DEBUG Starting new HTTP connection (4): archive.org session.py 241 ERROR Error retrieving metadata from http://archive.org/metadata/nasa, HTTPConnectionPool(host='archive.org', port=80): Max retries exceeded with url: /metadata/nasa (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known',)) _________________________________ test_no_args _________________________________ tmpdir_ch = local('/tmp/guix-build-python-internetarchive-1.7.1.drv-0/pytest-of-nixbld/pytest-0/test_no_args0') def test_no_args(tmpdir_ch): call_cmd('ia --insecure download nasa') > assert files_downloaded(path='nasa') == NASA_EXPECTED_FILES E AssertionError: assert set() == {'NASAarchiveLogo.jpg', 'g...xml', 'nasa_meta.xml', ...} E Extra items in the right set: E 'nasa_meta.xml' E 'nasa_archive.torrent' E 'NASAarchiveLogo.jpg' E 'nasa_reviews.xml' E 'nasa_files.xml' E 'globe_west_540.jpg' E 'globe_west_540_thumb.jpg' E Use -v to get the full diff ../../../internetarchive-1.7.1/tests/cli/test_ia_download.py:8: AssertionError __________________________________ test_https __________________________________ tmpdir_ch = local('/tmp/guix-build-python-internetarchive-1.7.1.drv-0/pytest-of-nixbld/pytest-0/test_https0') def 
test_https(tmpdir_ch): if sys.version_info < (2, 7, 9): stdout, stderr = call_cmd('ia download nasa', expected_exit_code=1) assert 'You are attempting to make an HTTPS' in stderr else: call_cmd('ia download nasa') > assert files_downloaded(path='nasa') == NASA_EXPECTED_FILES E AssertionError: assert set() == {'NASAarchiveLogo.jpg', 'g...xml', 'nasa_meta.xml', ...} E Extra items in the right set: E 'nasa_meta.xml' E 'nasa_archive.torrent' E 'NASAarchiveLogo.jpg' E 'nasa_reviews.xml' E 'nasa_files.xml' E 'globe_west_540.jpg' E 'globe_west_540_thumb.jpg' E Use -v to get the full diff ../../../internetarchive-1.7.1/tests/cli/test_ia_download.py:17: AssertionError _________________________________ test_dry_run _________________________________ def test_dry_run(): nasa_url = 'http://archive.org/download/nasa/' expected_urls = set([nasa_url + f for f in NASA_EXPECTED_FILES]) stdout, stderr = call_cmd('ia --insecure download --dry-run nasa') output_lines = stdout.split('\n') dry_run_urls = set([x.strip() for x in output_lines if x and 'nasa:' not in x]) > assert expected_urls == dry_run_urls E AssertionError: assert {'http://arch...eta.xml', ...} == set() E Extra items in the left set: E 'http://archive.org/download/nasa/globe_west_540_thumb.jpg' E 'http://archive.org/download/nasa/nasa_files.xml' E 'http://archive.org/download/nasa/nasa_reviews.xml' E 'http://archive.org/download/nasa/NASAarchiveLogo.jpg' E 'http://archive.org/download/nasa/nasa_archive.torrent' E 'http://archive.org/download/nasa/globe_west_540.jpg' E 'http://archive.org/download/nasa/nasa_meta.xml' E Use -v to get the full diff ../../../internetarchive-1.7.1/tests/cli/test_ia_download.py:28: AssertionError __________________________________ test_glob ___________________________________ tmpdir_ch = local('/tmp/guix-build-python-internetarchive-1.7.1.drv-0/pytest-of-nixbld/pytest-0/test_glob0') def test_glob(tmpdir_ch): expected_files = set([ 'globe_west_540.jpg', 'NASAarchiveLogo.jpg', 'globe_west_540_thumb.jpg' ]) call_cmd('ia --insecure download --glob="*jpg" nasa') > assert files_downloaded(path='nasa') == expected_files E AssertionError: assert set() == {'NASAarchiveLogo.jpg', 'g...'globe_west_540_thumb.jpg'} E Extra items in the right set: E 'globe_west_540.jpg' E 'globe_west_540_thumb.jpg' E 'NASAarchiveLogo.jpg' E Use -v to get the full diff ../../../internetarchive-1.7.1/tests/cli/test_ia_download.py:39: AssertionError _________________________________ test_format __________________________________ tmpdir_ch = local('/tmp/guix-build-python-internetarchive-1.7.1.drv-0/pytest-of-nixbld/pytest-0/test_format0') def test_format(tmpdir_ch): call_cmd('ia --insecure download --format="Archive BitTorrent" nasa') > assert files_downloaded(path='nasa') == set(['nasa_archive.torrent']) E AssertionError: assert set() == {'nasa_archive.torrent'} E Extra items in the right set: E 'nasa_archive.torrent' E Use -v to get the full diff ../../../internetarchive-1.7.1/tests/cli/test_ia_download.py:44: AssertionError _________________________________ test_clobber _________________________________ tmpdir_ch = local('/tmp/guix-build-python-internetarchive-1.7.1.drv-0/pytest-of-nixbld/pytest-0/test_clobber0') def test_clobber(tmpdir_ch): cmd = 'ia --insecure download nasa nasa_meta.xml' call_cmd(cmd) > assert files_downloaded('nasa') == set(['nasa_meta.xml']) E AssertionError: assert set() == {'nasa_meta.xml'} E Extra items in the right set: E 'nasa_meta.xml' E Use -v to get the full diff 
../../../internetarchive-1.7.1/tests/cli/test_ia_download.py:50: AssertionError ________________________________ test_checksum _________________________________ tmpdir_ch = local('/tmp/guix-build-python-internetarchive-1.7.1.drv-0/pytest-of-nixbld/pytest-0/test_checksum0') def test_checksum(tmpdir_ch): call_cmd('ia --insecure download nasa nasa_meta.xml') > assert files_downloaded('nasa') == set(['nasa_meta.xml']) E AssertionError: assert set() == {'nasa_meta.xml'} E Extra items in the right set: E 'nasa_meta.xml' E Use -v to get the full diff ../../../internetarchive-1.7.1/tests/cli/test_ia_download.py:59: AssertionError _____________________________ test_no_directories ______________________________ tmpdir_ch = local('/tmp/guix-build-python-internetarchive-1.7.1.drv-0/pytest-of-nixbld/pytest-0/test_no_directories0') def test_no_directories(tmpdir_ch): call_cmd('ia --insecure download --no-directories nasa nasa_meta.xml') > assert files_downloaded('.') == set(['nasa_meta.xml']) E AssertionError: assert set() == {'nasa_meta.xml'} E Extra items in the right set: E 'nasa_meta.xml' E Use -v to get the full diff ../../../internetarchive-1.7.1/tests/cli/test_ia_download.py:69: AssertionError _________________________________ test_destdir _________________________________ tmpdir_ch = local('/tmp/guix-build-python-internetarchive-1.7.1.drv-0/pytest-of-nixbld/pytest-0/test_destdir0') def test_destdir(tmpdir_ch): cmd = 'ia --insecure download --destdir=thisdirdoesnotexist/ nasa nasa_meta.xml' stdout, stderr = call_cmd(cmd, expected_exit_code=1) assert '--destdir must be a valid path to a directory.' in stderr tmpdir_ch.mkdir('thisdirdoesnotexist/') call_cmd(cmd) > assert files_downloaded('thisdirdoesnotexist/nasa') == set(['nasa_meta.xml']) E AssertionError: assert set() == {'nasa_meta.xml'} E Extra items in the right set: E 'nasa_meta.xml' E Use -v to get the full diff ../../../internetarchive-1.7.1/tests/cli/test_ia_download.py:80: AssertionError ============================ pytest-warning summary ============================ WI1 /gnu/store/9mmg3cws531bybx4yv976f1s8dj3qir9-python-pytest-capturelog-0.7/lib/python3.5/site-packages/pytest_capturelog.py:171 'pytest_runtest_makereport' hook uses deprecated __multicall__ argument WC1 None pytest_funcarg__caplog: declaring fixtures using "pytest_funcarg__" prefix is deprecated and scheduled to be removed in pytest 4.0. Please remove the prefix and use the @pytest.fixture decorator instead. WC1 None pytest_funcarg__capturelog: declaring fixtures using "pytest_funcarg__" prefix is deprecated and scheduled to be removed in pytest 4.0. Please remove the prefix and use the @pytest.fixture decorator instead. =========== 11 failed, 94 passed, 3 pytest-warnings in 76.14 seconds =========== --8<---------------cut here---------------end--------------->8--- Oleg Pykhalov (1): gnu: python-internetarchive: Update to 1.7.1. 
 gnu/packages/web.scm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

-- 
2.14.1

From debbugs-submit-bounces@debbugs.gnu.org Thu Aug 17 16:40:29 2017
From: Oleg Pykhalov
To: bug#28129 <28129@debbugs.gnu.org>
Subject: [PATCH 1/1] gnu: python-internetarchive: Update to 1.7.1.
Date: Thu, 17 Aug 2017 23:40:19 +0300
Message-ID: <87o9re9bd8.fsf@gmail.com>
MIME-Version: 1.0
Content-Type: text/x-patch
Content-Disposition: inline;
 filename=0001-gnu-python-internetarchive-Update-to-1.7.1.patch
Content-Description: [PATCH 1/1] gnu: python-internetarchive: Update to 1.7.1.
X-Debbugs-Envelope-To: 28129

>From 96e133f69d05d0708a1dbe00a685c5cbb9b47224 Mon Sep 17 00:00:00 2001
From: Oleg Pykhalov
Date: Thu, 17 Aug 2017 23:08:38 +0300
Subject: [PATCH 1/1] gnu: python-internetarchive: Update to 1.7.1.

* gnu/packages/web.scm (python-internetarchive): Update to 1.7.1.
---
 gnu/packages/web.scm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/gnu/packages/web.scm b/gnu/packages/web.scm
index 5459a3051..08e51cbde 100644
--- a/gnu/packages/web.scm
+++ b/gnu/packages/web.scm
@@ -4698,7 +4698,7 @@ command-line arguments or read from stdin.")
 (define-public python-internetarchive
   (package
     (name "python-internetarchive")
-    (version "1.6.0")
+    (version "1.7.1")
     (source
      (origin
        (method url-fetch)
@@ -4707,7 +4707,7 @@ command-line arguments or read from stdin.")
        (file-name (string-append name "-" version ".tar.gz"))
        (sha256
         (base32
-         "00v1489rv1ydcihwbdl7sqpcpmm98b9kqqlfggr32k0ndmv7ivas"))))
+         "1lj4r0y67mwjns2gcjvw0y7m5x0vqir2iv7s4q2y93492azli1qh"))))
     (build-system python-build-system)
     (arguments
      `(#:tests? #f ; 11 tests of 105 fail to mock "requests".
-- 
2.14.1

From debbugs-submit-bounces@debbugs.gnu.org Mon Aug 21 17:45:58 2017
From: Marius Bakke
To: Oleg Pykhalov, 28129-done@debbugs.gnu.org
Subject: Re: [bug#28129] [PATCH 0/1] gnu: python-internetarchive: Update to 1.7.1.
In-Reply-To: <877ey2aq5w.fsf@gmail.com>
References: <877ey2aq5w.fsf@gmail.com>
User-Agent: Notmuch/0.25 (https://notmuchmail.org) Emacs/25.2.1 (x86_64-unknown-linux-gnu)
Date: Mon, 21 Aug 2017 23:45:54 +0200
Message-ID: <878tic4mst.fsf@fastmail.com>
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="=-=-="; micalg=pgp-sha512;
 protocol="application/pgp-signature"
X-Debbugs-Envelope-To: 28129-done

--=-=-=
Content-Type: text/plain

Oleg Pykhalov writes:

> https://debbugs.gnu.org/cgi/bugreport.cgi?bug=27699
>
> Danny Milosavljevic writes:
>> After I fixed up the test invocation, still 11 tests of 105 fail,
>> apparently mostly because the Requests mock doesn't work. Could you
>> take a look?
>
>> The mocking is done in tests/conftest.py in internetarchive-1.6.0.
>
> 11 failed, whose (maybe) all require internet connections. When Guix
> build a package he has no networking inside chroot, has it?
>
> So, we cannot pass those tests. Could we just disable them selectively
> (not all 105)?

Hmf. It amazes me that pytest still has no "networking?" toggle.

The tests can be disabled selectively on the command line with the "-k"
switch. See 'python-pyopenssl' for an example. In this case it would be
something like "not test_get_item_with_kwargs and not test_ia and not ...".

Would you like to try it? I applied this patch regardless, since it's
apparently not a new problem.

Thanks!

--=-=-=
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCgAdFiEEu7At3yzq9qgNHeZDoqBt8qM6VPoFAlmbVJMACgkQoqBt8qM6
VPr5uQf/YdIVB5DG7sWpji+2F86YKdrTuiy22u32TL9ht4cvol/9x+n/YDyx2AXf
m9GaxuIiXQ7lwXtPLzIhC6Px/XUPF3Zczt8/NNoJLcujR+cYf3/F7UtKUt2Z7aSQ
jYFLSMGlsf/opw78pMm6NzHEZjzGxmLOhRIPF0aJ8ntI92/GnUfocYHmPZbnmd9Y
OoZcZpfT/pq9EjB2pwCTcQ0qPB8GHzBbCCIoGEXgSDDl73AbuGNjQrz401RN7Xij
QPmchLOQp/Ylfg+xnH3XZmufOI/vPK2YZwdZOsN+/ciuduedDACmza8slv69xwom
xu4ixNe6YNcWh539JZ3hUW+m+1G/0A==
=3uBU
-----END PGP SIGNATURE-----
--=-=-=--
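For readers who want to follow up on Marius's suggestion, here is a minimal
sketch of what such a selective deselection could look like in the package's
'arguments' field. The eleven test names are taken from the build log above;
the custom 'check' phase layout and the "py.test" invocation are assumptions
in the style of that era's Guix packages, not a tested recipe:

--8<---------------cut here---------------start------------->8---
(arguments
 `(#:phases
   (modify-phases %standard-phases
     (replace 'check
       (lambda _
         ;; Deselect the tests that try to reach archive.org; the
         ;; build container has no network access (see the log above).
         (zero? (system* "py.test" "-v" "-k"
                         (string-append
                          "not test_get_item_with_kwargs"
                          " and not test_ia"
                          " and not test_no_args"
                          " and not test_https"
                          " and not test_dry_run"
                          " and not test_glob"
                          " and not test_format"
                          " and not test_clobber"
                          " and not test_checksum"
                          " and not test_no_directories"
                          " and not test_destdir"))))))))
--8<---------------cut here---------------end--------------->8---

The trade-off is maintenance: the "-k" expression has to be kept in sync with
upstream's test suite, whereas the applied "#:tests? #f" silently skips
everything, including the 94 tests that do pass without network access.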