
web_scraper

The get request from the requests library does not download the content to your machine as a file. Instead, it retrieves the content of the web page and holds it in memory. You can then access this content directly in your Python script without saving it to a file.

How requests.get Works

When you use requests.get(url), it sends a request to the web server hosting the page. The server responds with the content of the page, which is then stored in a Response object. You can access this content through various attributes and methods of the Response object.
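As a minimal sketch of this flow (using https://example.com as a placeholder URL, assumed reachable), the Response object can be inspected like this:

```python
import requests

# Send the request; the server's reply is held in memory on the Response object.
url = "https://example.com"  # placeholder URL
response = requests.get(url)

print(response.status_code)              # HTTP status code, e.g. 200
print(response.headers["Content-Type"])  # response headers
html = response.text                     # the page HTML as a string, still only in memory
```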

BeautifulSoup4 is a handy tool that makes it easy to find and extract information from web pages. By using it with the requests library, you can download and analyze web content to get the data you need.

Once you have the HTML content in memory, you can parse it with BeautifulSoup without saving it to a file.
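A minimal sketch of combining the two libraries, assuming beautifulsoup4 is installed and again using https://example.com as a placeholder URL:

```python
import requests
from bs4 import BeautifulSoup

response = requests.get("https://example.com")  # placeholder URL

# Parse the in-memory HTML string; nothing is written to disk.
soup = BeautifulSoup(response.text, "html.parser")

print(soup.title.string)            # the page title
for link in soup.find_all("a"):     # every anchor tag on the page
    print(link.get("href"))         # its href attribute
```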

What is PyAutoGUI?

PyAutoGUI is a Python library that allows you to automate mouse and keyboard actions. You can use it to move the mouse, click, drag, type, press keyboard keys, take screenshots, and more. Applications: automated tasks, gaming.
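A small sketch of these actions, assuming pyautogui is installed and an active desktop session is available (the file name screen.png is just an example):

```python
import pyautogui

# Safety net: slamming the mouse into a screen corner aborts the script.
pyautogui.FAILSAFE = True

screen_width, screen_height = pyautogui.size()                        # current screen resolution
pyautogui.moveTo(screen_width // 2, screen_height // 2, duration=1)   # glide to the screen center
pyautogui.click()                                                     # left-click at that position
pyautogui.write("hello from pyautogui", interval=0.05)                # type text, one key at a time
pyautogui.press("enter")                                              # press a single key

pyautogui.screenshot("screen.png")                                    # capture the screen to a file
```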

Resources:

https://github.com/Abugahh/30-Days-Of-Python/blob/master/22_Day_Web_scraping/22_web_scraping.md

https://www.geeksforgeeks.org/python-web-scraping-tutorial/

WOD: don't be afraid to be seen trying.

# pip freeze
