How can I get the file size from a link without downloading it in python?


I have a list of links and I'm trying to get the size of each file to determine how many computational resources each one needs. Is it possible to get just the file size with a GET request or something similar?

Here is an example of one of the links:



To do this, use the HTTP HEAD method, which grabs only the header information for the URL and doesn't download the content the way an HTTP GET request does.

$ curl -I
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 18 Mar 2019 16:56:35 GMT
Content-Type: application/octet-stream
Content-Length: 578220087
Last-Modified: Tue, 21 Feb 2017 12:13:19 GMT
Connection: keep-alive
Accept-Ranges: bytes

The file size is in the 'Content-Length' header. In Python 3.6, issue a HEAD request with urllib.request (here url is the link to the file):

>>> import urllib.request
>>> req = urllib.request.Request(url, method='HEAD')
>>> f = urllib.request.urlopen(req)
>>> f.status
>>> f.headers['Content-Length']
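The same idea can be wrapped in a reusable helper. Below is a self-contained sketch: head_size (a hypothetical name, not from the original answer) issues a HEAD request and parses 'Content-Length'; it is demonstrated against a throwaway local server so the example runs without needing the real link.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def head_size(url):
    """Return the Content-Length reported by a HEAD request, or None if absent."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        length = resp.headers.get("Content-Length")
        return int(length) if length is not None else None

# --- demo against a local server so the example is self-contained ---
class Handler(BaseHTTPRequestHandler):
    def do_HEAD(self):
        # Advertise a size without sending any body, like a real file server would.
        self.send_response(200)
        self.send_header("Content-Length", "578220087")
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

size = head_size("http://127.0.0.1:%d/file.bin" % server.server_port)
print(size)  # 578220087
server.shutdown()
```

For a list of links, call head_size in a loop; note that some servers omit Content-Length (the helper then returns None), and some misbehave on HEAD, in which case a ranged GET is a common fallback.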

Answered By – Steven Graham

This answer was collected from Stack Overflow and is licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.
