curl on a list of links: save only specific links along with their responses


Hi all, I have a list of links, and I am getting their responses through curl using this command:
curl -K file.txt
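For reference, `curl -K` reads a config file with one option per line, so a list of links for this use case could look like the following (the URLs are hypothetical placeholders, not from the original question):

```shell
# Hypothetical file.txt for `curl -K`: one "url = ..." entry per link.
cat > file.txt <<'EOF'
url = "https://example.com/page1"
url = "https://example.com/page2"
EOF
```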

I am saving the responses to a file.

But the problem is that the output is rough (all of the curl responses are concatenated into one file).
I want to filter the responses in two possible ways.

1 – The end result in the file should be organized with each URL listed against its response (or some other organized layout).


(including all of the returned HTML)

2 – Once the first part is done, I want to keep only the URLs and responses that contain a specific keyword.
Only URLs whose response contains "helloworld" should stay in the final result file.
If the keyword does not appear in the response for a given link, remove that link and its result from the file.

Here is my list of urls:


This example assumes that file.txt exists.

It tests for the presence of HelloWorld in each response.

The if statements are split across three lines for readability.

responses.txt will contain the result.
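Given the printf format used in the script, each matching URL and its body should appear in responses.txt roughly like this (hypothetical URL and body, for illustration only):

```
Heading https://example.com/page1
Response <html>... helloworld ...</html>
```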

An intermediary file, /var/tmp/response-tmp.txt, is used to store each HTTP response in turn.

Give this a try:

regex="HelloWorld"   # keyword that must appear in the response
rm -f responses.txt 2>/dev/null
for url in $(sed -n 's/^[[:blank:]]*url[[:blank:]]*=[[:blank:]]*//gIp' file.txt); do
  curl -w "http_code %{http_code}" -Ns --url "${url}" > /var/tmp/response-tmp.txt 2>/dev/null
  if [[ $? -eq 0 ]] ; then
    if tail -1 /var/tmp/response-tmp.txt 2>/dev/null | grep -aq "http_code 2[0-9][0-9]$" 2>/dev/null ; then
      if grep -aq "${regex}" /var/tmp/response-tmp.txt 2>/dev/null ; then
        printf "Heading %s\nResponse " "${url}"
        head -c -13 /var/tmp/response-tmp.txt   # strip the trailing "http_code NNN"
      fi
    fi
  fi
done >> responses.txt
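Two pieces of the script may be non-obvious: the sed line extracts the URL values from file.txt (case-insensitively, via GNU sed's `I` flag), and `head -c -13` trims the 13-byte "http_code NNN" trailer that `-w` appended to the saved response. A minimal sketch with hypothetical file contents (note that both `head -c -N` and the sed `I` flag are GNU extensions):

```shell
# Hypothetical config: two "url = ..." lines with mixed-case keys.
cat > /tmp/demo-file.txt <<'EOF'
url = https://example.com/a
URL = https://example.com/b
EOF

# 1) Extract the URL values (case-insensitive match on "url ="):
sed -n 's/^[[:blank:]]*url[[:blank:]]*=[[:blank:]]*//gIp' /tmp/demo-file.txt

# 2) Strip a trailing "http_code NNN" status trailer (13 bytes):
printf 'BODYhttp_code 200' | head -c -13
```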

Answered By – Jay jargot

This answer, collected from Stack Overflow, is licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.
