Issue
I am scraping a website with Python, Selenium, and the headless Chrome driver, which involves executing a loop:
# perform loop
CustId = 2000
while CustId <= 3000:
    # Part 1: Customer REST call:
    urlg = f'https://mywebsite.com/customerRest/show/?id={CustId}'
    driver.get(urlg)
    soup = BeautifulSoup(driver.page_source, "lxml")
    dict_from_json = json.loads(soup.find("body").text)
    # logic for web scraping is here...
    CustId = CustId + 1

# close driver at end of everything
driver.close()
However, the page sometimes does not exist for certain customer IDs. I have no control over this, and the code stops with a 404 page-not-found error. How do I ignore this and just move on with the loop?
I'm guessing I need a TRY…EXCEPT?
Solution
You can check the text of the page body (for example, the h1 tag) for the message that appears on a 404 error, then wrap the scraping logic in an if clause so it only runs when that message is absent:
CustId = 2000
while CustId <= 3000:
    urlg = f'https://mywebsite.com/customerRest/show/?id={CustId}'
    driver.get(urlg)
    soup = BeautifulSoup(driver.page_source, "lxml")
    if "Page not found" not in soup.find("body").text:
        dict_from_json = json.loads(soup.find("body").text)
        # logic for web scraping is here...
    CustId = CustId + 1
Or, checking for "404" in the body text instead:
CustId = 2000
while CustId <= 3000:
    urlg = f'https://mywebsite.com/customerRest/show/?id={CustId}'
    driver.get(urlg)
    soup = BeautifulSoup(driver.page_source, "lxml")
    if "404" not in soup.find("body").text:
        dict_from_json = json.loads(soup.find("body").text)
        # logic for web scraping is here...
    CustId = CustId + 1
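Since the question mentions TRY…EXCEPT: an equivalent approach is to attempt json.loads on the body and catch json.JSONDecodeError, which fires when the body is a 404 error page rather than JSON. This is a minimal sketch; fetch_body is a hypothetical stand-in for the driver.get / page_source / BeautifulSoup steps above, so it can run without a browser:

```python
import json

def fetch_body(cust_id):
    # Hypothetical stand-in for the Selenium fetch in the loop above.
    # In this mock, odd IDs simulate a missing customer (404 page).
    if cust_id % 2:
        return "<h1>Page not found</h1>"
    return json.dumps({"id": cust_id, "name": f"Customer {cust_id}"})

results = []
cust_id = 2000
while cust_id <= 2003:
    body = fetch_body(cust_id)
    try:
        dict_from_json = json.loads(body)
    except json.JSONDecodeError:
        # body was not valid JSON (e.g. a 404 error page) -- skip this ID
        pass
    else:
        # logic for web scraping would go here
        results.append(dict_from_json)
    cust_id += 1

print(results)
```

Either way, the key point is the same: the increment stays outside the check, so the loop always advances to the next customer ID.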
Answered By – KunduK
This answer, collected from Stack Overflow, is licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.