{"id":856,"date":"2024-09-11T20:43:59","date_gmt":"2024-09-11T20:43:59","guid":{"rendered":"https:\/\/rayobyte.com\/community\/?post_type=scraping_project&#038;p=856"},"modified":"2024-09-13T17:01:33","modified_gmt":"2024-09-13T17:01:33","slug":"automate-retail-price-monitoring-with-a-python-scraper","status":"publish","type":"scraping_project","link":"https:\/\/rayobyte.com\/community\/scraping-project\/automate-retail-price-monitoring-with-a-python-scraper\/","title":{"rendered":"Automate Retail Price Monitoring with a Python Scraper"},"content":{"rendered":"<h2 style=\"text-align: center;\">Watch how you can setup an automated retail price monitoring system using Python:<\/h2>\n<p style=\"text-align: center;\"><iframe loading=\"lazy\" title=\"YouTube video player\" src=\"https:\/\/www.youtube.com\/embed\/sGy8t3pS15k?si=lzo-y6wopzspvDn1\" width=\"560\" height=\"315\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/p>\n<h2>Important links to get started<\/h2>\n<ul>\n<li><a href=\"https:\/\/github.com\/MDFARHYN\/retail_price_scraping\" target=\"_blank\" rel=\"nofollow noopener\">Download the full code from GitHub repo<\/a><\/li>\n<li><a href=\"https:\/\/chromewebstore.google.com\/detail\/selectorgadget\/mhjhnkcfbdhnjickkkdbjoemdmbfginb?hl=en&amp;pli=1\" rel=\"nofollow noopener\" target=\"_blank\">Chrome extensions download link<\/a><\/li>\n<\/ul>\n<h2><b>Introduction<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">Tracking the prices manually can be overwhelming. On top of that, prices change almost by the minute on big ecommerce sites like Amazon or Walmart which have multiple thousand transactions a day. And this is exactly where you require automated retail price monitoring. Automating the process saves businesses a lot of time, eliminates human error and keeps you informed on market trends as\/when they happen. 
Automated reports keep you thinking big, not small.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In this tutorial, we are going to build an automated retail price monitoring system with Python and see how to perform web scraping using BeautifulSoup. The code we create will monitor retail prices automatically: it will scrape prices from Amazon and Walmart, follow price changes over time, and alert you when there is a substantial shift in cost. By the end of this Automate Retail Price Monitoring with a Python Scraper tutorial, you&#8217;ll have a solid tool to help your retail business stay ahead of the competition.<\/span><\/p>\n<h1 style=\"text-align: left;\"><b>Automate Retail Price Monitoring with a Python Scraper<\/b><\/h1>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-857 size-full\" src=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/python_scraper.jpg\" alt=\"python_scraper\" width=\"1024\" height=\"1024\" title=\"\" srcset=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/python_scraper.jpg 1024w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/python_scraper-300x300.jpg 300w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/python_scraper-150x150.jpg 150w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/python_scraper-768x768.jpg 768w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/python_scraper-624x624.jpg 624w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p><b><i>Retail Price Monitoring System<\/i><\/b><\/p>\n<p><span style=\"font-weight: 400;\">Price monitoring lets you track what your competitors are charging, not only so that you can react to changes but also so that you see the bigger picture. It helps you anticipate other players&#8217; moves before they happen. If you observe the same type of product repeatedly declining in price, that may signify that demand is decreasing or that new models are coming. Information like that can be critical to outpacing the competition and making the moves that drive revenue in an increasingly competitive business environment.<\/span><\/p>\n<h2><b>Tools and Libraries Needed<\/b><\/h2>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-859 size-full aligncenter\" src=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/pandas.jpg\" alt=\"pandas\" width=\"1024\" height=\"1024\" title=\"\" srcset=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/pandas.jpg 1024w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/pandas-300x300.jpg 300w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/pandas-150x150.jpg 150w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/pandas-768x768.jpg 768w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/pandas-624x624.jpg 624w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p><span style=\"font-weight: 400;\">In this section, we will cover the tools and libraries you need to set up your Python environment. 
This environment is the foundation on which we will build our price monitoring system.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Here is a high-level overview of the key Python libraries we need:<\/span><\/p>\n<p><b>BeautifulSoup:<\/b><span style=\"font-weight: 400;\"> This library provides tools for extracting information from web pages written in HTML or XML.<\/span><\/p>\n<p><b>Requests:<\/b><span style=\"font-weight: 400;\"> This is the library we use to make GET and POST requests; in this case, we will use it to download the web pages that contain the prices.<\/span><\/p>\n<p><b>Pandas: <\/b><span style=\"font-weight: 400;\">Pandas is a powerful data manipulation library, great for organizing and cleaning the data you scrape from web pages.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">You should install these first. If you do not have them yet, install them with Python&#8217;s package manager `pip`. In your command line or terminal run:<\/span><\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\">pip install beautifulsoup4\r\n\r\npip install requests\r\n\r\npip install pandas\r\n\r\npip install lxml<\/pre>\n<p><span style=\"font-weight: 400;\">This will download and install the libraries so that you can use them in your Python scripts.<\/span><\/p>\n<h2><b>Web Scraping Introduction<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">Now that you have your tools, let&#8217;s step back a little and talk about web scraping. Web scraping is simply extracting data from websites automatically. Instead of copying information from web pages by hand, you write a script that does it for you: faster, more reliable, and with fewer errors.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This gives us the ability to scrape product prices from e-commerce websites like Amazon and Walmart. 
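To see how these three libraries fit together before we scrape a real site, here is a minimal sketch; the HTML snippet and its class names are invented for illustration:

```python
from bs4 import BeautifulSoup
import pandas as pd

# A tiny HTML snippet standing in for a downloaded product page;
# the markup and class names are made up for this example.
html = """
<div class="product"><span class="name">Watch A</span><span class="price">$99.00</span></div>
<div class="product"><span class="name">Watch B</span><span class="price">$79.50</span></div>
"""

# BeautifulSoup parses the HTML and targets elements by tag and class
soup = BeautifulSoup(html, "html.parser")  # 'lxml' also works once installed
rows = [
    {"name": p.find("span", class_="name").text,
     "price": float(p.find("span", class_="price").text.lstrip("$"))}
    for p in soup.find_all("div", class_="product")
]

# pandas organizes the scraped rows for cleaning and analysis
df = pd.DataFrame(rows)
print(df)
```

The same pattern, download, parse, organize, is what the rest of this tutorial builds on.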
This data is what we will then rely on for monitoring prices over time.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">But you should know that scraping has legal considerations. Most websites have terms of service that detail what you can and cannot do. Some sites ban scraping outright, while others allow requests only in limited quantities or for limited purposes. Amazon and Walmart, for instance, do not allow automated scraping of their platforms.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Here are some best practices to keep you on the right side of the law:<\/span><\/p>\n<p><span style=\"font-weight: 400;\"><strong>&#8211; Respect the <code>robots.txt<\/code> file:<\/strong> most sites have a `robots.txt` file that describes where crawlers are allowed on the site and where they are not. Inspect this file and follow the directions it gives you.<\/span><\/p>\n<p><span style=\"font-weight: 400;\"><strong>\u2013 Avoid overloading the server:<\/strong> Never send too many requests in a short span. Doing so puts load on the server and can get your IP address blocked.<\/span><\/p>\n<p><span style=\"font-weight: 400;\"><strong>\u2013 Use proxies as needed: <\/strong><\/span>To prevent blocking and throttling, it&#8217;s essential to spread a large volume of requests across multiple IPs. Getting blocked is a common issue when scraping, which is why integrating a proxy into your scraper is necessary.<\/p>\n<p><span style=\"font-weight: 400;\">This lets you use web scraping safely and ethically while building an effective retail price monitoring system. 
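The robots.txt practice can be automated with Python's built-in `urllib.robotparser`. In this sketch the site and its rules are made up for illustration; in a real scraper you would fetch the site's actual robots.txt:

```python
import time
from urllib.robotparser import RobotFileParser

# Example robots.txt rules; the domain and paths are hypothetical
robots_txt = """
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Fetch only URLs the rules allow, and pause between requests
for page in ["https://example.com/products/1", "https://example.com/private/admin"]:
    if rp.can_fetch("*", page):
        print("allowed:", page)
        time.sleep(1)  # throttle so we don't overload the server
    else:
        print("skipped (disallowed):", page)
```

The `time.sleep` call is the simplest form of rate limiting; the one-second pause is an arbitrary example value.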
This way you can avoid breaking the rules and guidelines of the site you are working with while gathering the data.<\/span><\/p>\n<h2><b>Scraping Basics<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">Before we get into scraping specific websites, let us start with the basics. The idea behind web scraping is to take a webpage and programmatically extract its content so that you can do something useful with the data.<\/span><\/p>\n<ol>\n<li><b> Making HTTP Requests:<\/b><\/li>\n<\/ol>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-861 size-full aligncenter\" src=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/http.jpg\" alt=\"http\" width=\"1024\" height=\"1024\" title=\"\" srcset=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/http.jpg 1024w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/http-300x300.jpg 300w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/http-150x150.jpg 150w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/http-768x768.jpg 768w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/http-624x624.jpg 624w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p><span style=\"font-weight: 400;\">The first step in web scraping is sending an HTTP request to the website you want to scrape. In Python this is done through the Requests library. 
The response to the request is the webpage&#8217;s HTML content, which you&#8217;ll then extract data from.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8211; Here\u2019s a simple example:<\/span><\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\">import requests\r\n\r\nurl = \"https:\/\/quotes.toscrape.com\/\"\r\n\r\ndownload_content = requests.get(url).text # download the website's HTML page using the requests library\r\n\r\nprint(download_content)<\/pre>\n<p><span style=\"font-weight: 400;\">\u2013 In this code we make a GET request to <strong>&#8220;https:\/\/quotes.toscrape.com\/&#8221;<\/strong> and save the HTML content of the response in the download_content variable.<\/span><\/p>\n<ol start=\"2\">\n<li><b> Parsing HTML:<\/b><\/li>\n<\/ol>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-863 size-full\" src=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/bs4.jpg\" alt=\"bs4\" width=\"1024\" height=\"1024\" title=\"\" srcset=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/bs4.jpg 1024w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/bs4-300x300.jpg 300w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/bs4-150x150.jpg 150w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/bs4-768x768.jpg 768w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/bs4-624x624.jpg 624w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p>Now that you have the HTML content, you need to parse it. This is where the <code>BeautifulSoup<\/code> library comes in. 
The purpose of <code>BeautifulSoup<\/code> is to target specific HTML elements on the page using methods such as <code>soup.find<\/code> and <code>soup.select<\/code>.<\/p>\n<p><span style=\"font-weight: 400;\">&#8211; <strong>Parsing the HTML content:<\/strong><\/span><\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\">import requests\r\n\r\nfrom bs4 import BeautifulSoup\r\n\r\nurl = \"https:\/\/quotes.toscrape.com\/\"\r\n\r\ndownload_content = requests.get(url).text # download the website's HTML page using the requests library\r\n\r\nsoup = BeautifulSoup(download_content, 'lxml') # parse the content using BeautifulSoup\r\n\r\nprint(soup)<\/pre>\n<p><span style=\"font-weight: 400;\">&#8211; Now you can search <code>soup<\/code> for elements of interest, like product prices, by their tags, classes, or IDs. You can read more in the <\/span><a href=\"https:\/\/beautiful-soup-4.readthedocs.io\/en\/latest\/#kinds-of-objects\" rel=\"nofollow noopener\" target=\"_blank\"><span style=\"font-weight: 400;\">BeautifulSoup documentation<\/span><\/a><\/p>\n<h2><b>Extracting Prices from Amazon<\/b><\/h2>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-865 size-full\" src=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/amazon.jpg\" alt=\"amazon\" width=\"1024\" height=\"1024\" title=\"\" srcset=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/amazon.jpg 1024w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/amazon-300x300.jpg 300w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/amazon-150x150.jpg 150w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/amazon-768x768.jpg 768w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/amazon-624x624.jpg 624w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p><span 
style=\"font-weight: 400;\">For implementation, just understand the concept and we will scrape product prices from Amazon.<\/span><\/p>\n<ol>\n<li><b> Identify the Target Element:<\/b><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">\u2013 First: you need to go to the Amazon product page and check price element. You can do this by right clicking on the price, and choosing \u201cInspect\u201d in your browser. Find some sort of specific identifier \u2014 either an ID, a classname or both; whatever you can use to point your scraper towards the price.<\/span><\/p>\n<ol start=\"2\">\n<li><b> Write the Scraping Code:<\/b><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">&#8211; Get the HTML content of the page using `Requests` and extract only price from it with BeautifulSoup as soon you know which element to target<\/span><\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\">import requests\r\n\r\nfrom bs4 import BeautifulSoup\r\n\r\nimport smtplib\r\n\r\nimport csv\r\n\r\nimport os\r\n\r\nfrom datetime import datetime\r\n\r\n\r\n\r\nurl = \"https:\/\/www.amazon.com\/Fitbit-Management-Intensity-Tracking-Midnight\/dp\/B0B5F9SZW7\/?_encoding=UTF8&amp;pd_rd_w=raGwi&amp;content-id=amzn1.sym.9929d3ab-edb7-4ef5-a232-26d90f828fa5&amp;pf_rd_p=9929d3ab-edb7-4ef5-a232-26d90f828fa5&amp;pf_rd_r=A1B0XQ919M066QVE71VN&amp;pd_rd_wg=Aw2vX&amp;pd_rd_r=69a343dc-b5f2-4e2a-ae85-2ca4e3945a26&amp;ref_=pd_hp_d_btf_crs_zg_bs_3375251&amp;th=1\"\r\n\r\n\r\n\r\n# Scraping the price from the webpage\r\n\r\ndownload_content = requests.get(url).text #downloading the html using python request\u00a0\r\n\r\nsoup = BeautifulSoup(download_content, 'lxml') #parsing the HTML content using BeautifulSoup\r\n\r\nprice = soup.find(\"span\", class_=\"a-price-whole\") #using soup.find method for target price\r\n\r\n\r\n\r\n\r\n# Cleaning and converting the price\r\n\r\nprice = price.text.strip().replace('.', '')\r\n\r\nprice = float(price)\r\n\r\n\r\n\r\n\r\n# Get the current 
date\r\n\r\ncurrent_date = datetime.now().strftime(\"%Y-%m-%d\")\r\n\r\n\r\n\r\n\r\n# File name for the CSV file\r\n\r\nfile_name = 'price_data.csv'\r\n\r\n\r\n\r\n\r\n# Function to save price data to CSV\r\n\r\ndef save_to_csv(url, price):\r\n\r\n\u00a0\u00a0\u00a0\u00a0file_exists = os.path.isfile(file_name)\r\n\r\n\u00a0\u00a0\u00a0\u00a0\r\n\r\n\u00a0\u00a0\u00a0\u00a0with open(file_name, 'a', newline='') as csvfile:\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0fieldnames = ['Date', 'URL', 'Price']\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0writer = csv.DictWriter(csvfile, fieldnames=fieldnames)\r\n\r\n\r\n\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0# Write header only if the file does not exist\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0if not file_exists:\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0writer.writeheader()\r\n\r\n\r\n\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0# Write data row\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0writer.writerow({'Date': current_date, 'URL': url, 'Price': price})\r\n\r\n\r\n\r\n\r\n\u00a0\u00a0\u00a0\u00a0print(f\"Data saved to {file_name}\")\r\n\r\n\r\n\r\n\r\n# Function to send an email alert\r\n\r\ndef send_email():\r\n\r\n\u00a0\u00a0\u00a0\u00a0email = \"your email\"\r\n\r\n\u00a0\u00a0\u00a0\u00a0receiver_email = \"receiver_email\"\r\n\r\n\u00a0\u00a0\u00a0\u00a0subject = \"Walmart Price Alert\"\r\n\r\n\u00a0\u00a0\u00a0\u00a0message = f\"Great news! The price has dropped. 
The new price is now {price}!\"\r\n\r\n\u00a0\u00a0\u00a0\u00a0text = f\"Subject:{subject}\\n\\n{message}\"\r\n\r\n\r\n\r\n\r\n\u00a0\u00a0\u00a0\u00a0server = smtplib.SMTP(\"smtp.gmail.com\", 587)\r\n\r\n\u00a0\u00a0\u00a0\u00a0server.starttls()\r\n\r\n\u00a0\u00a0\u00a0\u00a0server.login(email, \"your app password\")\r\n\r\n\u00a0\u00a0\u00a0\u00a0server.sendmail(email, receiver_email, text)\r\n\r\n\u00a0\u00a0\u00a0\u00a0server.quit()\r\n\r\n\u00a0\u00a0\u00a0\u00a0print(\"Email sent!\")\r\n\r\n\r\n\r\n\r\n# Function to check price and notify if needed\r\n\r\ndef check_and_notify():\r\n\r\n\u00a0\u00a0\u00a0\u00a0if price &lt; 80: # Threshold price for notification\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0send_email()\r\n\r\n\r\n\r\n\r\n# Save the scraped data to CSV\r\n\r\nsave_to_csv(url, price)\r\n\r\n# Check the price and send an alert if needed\r\n\r\ncheck_and_notify()<\/pre>\n<p><span style=\"font-weight: 400;\"><strong>Code Explanation:<\/strong> The line <code>requests.get(url).text<\/code> makes an HTTP request using Python&#8217;s requests library and downloads the HTML content. Then, in <code>BeautifulSoup(download_content, 'lxml')<\/code>, we parse the content with BeautifulSoup and use the <code>soup.find<\/code> method to target the price. The <code>send_email()<\/code> function is responsible for sending the email notification, and <code>check_and_notify()<\/code> calls it only when the price drops below our threshold of 80. 
Finally, after finishing the scraping, we save the results in a CSV file, which looks like this:<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-867 size-large\" src=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/csv-1024x539.png\" alt=\"csv\" width=\"1024\" height=\"539\" title=\"\" srcset=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/csv-1024x539.png 1024w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/csv-300x158.png 300w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/csv-768x404.png 768w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/csv-1536x809.png 1536w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/csv-624x329.png 624w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/csv.png 1924w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<ol start=\"3\">\n<li><b> Handle Potential Issues:<\/b><\/li>\n<\/ol>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-869 size-full aligncenter\" src=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/avoid_issue.jpg\" alt=\"\" width=\"1024\" height=\"1024\" title=\"\" srcset=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/avoid_issue.jpg 1024w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/avoid_issue-300x300.jpg 300w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/avoid_issue-150x150.jpg 150w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/avoid_issue-768x768.jpg 768w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/avoid_issue-624x624.jpg 624w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p><span style=\"font-weight: 400;\">&#8211; Amazon is notorious for changing their HTML structure more than Dr Jekyll adjusts his personality. Be sure to set a proxy as well, so that Amazon does not block you. 
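Because the page layout can change (or a block page can come back instead of the product page), it helps to wrap the price lookup in a defensive helper. This sketch reuses the `a-price-whole` class targeted above and returns `None` instead of crashing when the element is missing:

```python
from bs4 import BeautifulSoup

def extract_price(html: str):
    """Return the whole-dollar price as a float, or None if the element is missing."""
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("span", class_="a-price-whole")  # same class the scraper targets
    if tag is None:  # layout changed, or we received a block/captcha page
        return None
    # keep only digits, matching the whole-dollar cleaning used in the scraper
    digits = "".join(ch for ch in tag.text if ch.isdigit())
    return float(digits) if digits else None

print(extract_price('<span class="a-price-whole">129.</span>'))  # -> 129.0
print(extract_price('<div>captcha page</div>'))                  # -> None
```

A `None` result is a signal to skip the CSV write and alert for that run rather than record a bogus price.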
There are many proxy providers available in the market; you can use any of them. I am using<a href=\"https:\/\/rayobyte.com\/\"> Rayobyte Proxy<\/a>. Here is code how you can use\u00a0 proxy in your scraper<\/span><\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\">import requests\r\n\r\nfrom bs4 import BeautifulSoup\r\n\r\nimport smtplib\r\n\r\nimport csv\r\n\r\nimport os\r\n\r\nfrom datetime import datetime\r\n\r\n\r\n\r\n\r\n# Proxy and headers configuration for avoid blocking\r\n\r\nproxies = {\r\n\r\n\u00a0\u00a0\u00a0\u00a0\"https\": \"http:\/\/PROXY_USERNAME:PROXY_PASS@PROXY_SERVER:PROXY_PORT\/\"\r\n\r\n}\r\n\r\n\r\n\r\n\r\nheaders = {\r\n\r\n\u00a0\u00a0\u00a0\u00a0'user-agent': 'Mozilla\/5.0 (X11; Linux x86_64) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/87.0.4280.141 Safari\/537.36',\r\n\r\n\u00a0\u00a0\u00a0\u00a0'content-encoding': 'gzip'\r\n\r\n}\r\n\r\n\r\n\r\n\r\nurl = \"https:\/\/www.amazon.com\/Fitbit-Management-Intensity-Tracking-Midnight\/dp\/B0B5F9SZW7\/?_encoding=UTF8&amp;pd_rd_w=raGwi&amp;content-id=amzn1.sym.9929d3ab-edb7-4ef5-a232-26d90f828fa5&amp;pf_rd_p=9929d3ab-edb7-4ef5-a232-26d90f828fa5&amp;pf_rd_r=A1B0XQ919M066QVE71VN&amp;pd_rd_wg=Aw2vX&amp;pd_rd_r=69a343dc-b5f2-4e2a-ae85-2ca4e3945a26&amp;ref_=pd_hp_d_btf_crs_zg_bs_3375251&amp;th=1\"\r\n\r\n\r\n\r\n\r\n# Scraping the price from the webpage\r\n\r\ndownload_content = requests.get(url,proxies=proxies,headers=headers).text #downloading the html using python request and also using proxy for avoid blocking\r\n\r\nsoup = BeautifulSoup(download_content, 'lxml') #parsing the HTML content using BeautifulSoup\r\n\r\nprice = soup.find(\"span\", class_=\"a-price-whole\")\u00a0 #using soup.find method for target price\r\n\r\n\r\n\r\n\r\n\r\n# Cleaning and converting the price\r\n\r\nprice = price.text.strip().replace('.', '')\r\n\r\nprice = float(price)\r\n\r\n\r\n\r\n\r\n# Get the current date\r\n\r\ncurrent_date = datetime.now().strftime(\"%Y-%m-%d\")\r\n\r\n\r\n\r\n\r\n# 
File name for the CSV file\r\n\r\nfile_name = 'price_data.csv'\r\n\r\n\r\n\r\n\r\n# Function to save price data to CSV\r\n\r\ndef save_to_csv(url, price):\r\n\r\n\u00a0\u00a0\u00a0\u00a0file_exists = os.path.isfile(file_name)\r\n\r\n\u00a0\u00a0\u00a0\u00a0\r\n\r\n\u00a0\u00a0\u00a0\u00a0with open(file_name, 'a', newline='') as csvfile:\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0fieldnames = ['Date', 'URL', 'Price']\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0writer = csv.DictWriter(csvfile, fieldnames=fieldnames)\r\n\r\n\r\n\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0# Write header only if the file does not exist\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0if not file_exists:\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0writer.writeheader()\r\n\r\n\r\n\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0# Write data row\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0writer.writerow({'Date': current_date, 'URL': url, 'Price': price})\r\n\r\n\r\n\r\n\r\n\u00a0\u00a0\u00a0\u00a0print(f\"Data saved to {file_name}\")\r\n\r\n\r\n\r\n\r\n# Function to send an email alert\r\n\r\ndef send_email():\r\n\r\n\u00a0\u00a0\u00a0\u00a0email = \"your email\"\r\n\r\n\u00a0\u00a0\u00a0\u00a0receiver_email = \"receiver_email\"\r\n\r\n\u00a0\u00a0\u00a0\u00a0subject = \"Amazon Price Alert\"\r\n\r\n\u00a0\u00a0\u00a0\u00a0message = f\"Great news! The price has dropped. 
The new price is now {price}!\"\r\n\r\n\u00a0\u00a0\u00a0\u00a0text = f\"Subject:{subject}\\n\\n{message}\"\r\n\r\n\r\n\r\n\r\n\u00a0\u00a0\u00a0\u00a0server = smtplib.SMTP(\"smtp.gmail.com\", 587)\r\n\r\n\u00a0\u00a0\u00a0\u00a0server.starttls()\r\n\r\n\u00a0\u00a0\u00a0\u00a0server.login(email, \"your app password\")\r\n\r\n\u00a0\u00a0\u00a0\u00a0server.sendmail(email, receiver_email, text)\r\n\r\n\u00a0\u00a0\u00a0\u00a0server.quit()\r\n\r\n\u00a0\u00a0\u00a0\u00a0print(\"Email sent!\")\r\n\r\n\r\n\r\n\r\n# Function to check price and notify if needed\r\n\r\ndef check_and_notify():\r\n\r\n\u00a0\u00a0\u00a0\u00a0if price &lt; 80: # Threshold price for notification\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0send_email()\r\n\r\n\r\n\r\n\r\n# Save the scraped data to CSV\r\n\r\nsave_to_csv(url, price)\r\n\r\n# Check the price and send an alert if needed\r\n\r\ncheck_and_notify()\r\n\r\n<\/pre>\n<h2><b>Extract Prices from Walmart<\/b><\/h2>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-871 size-full\" src=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/walmart.jpg\" alt=\"\" width=\"1024\" height=\"1024\" title=\"\" srcset=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/walmart.jpg 1024w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/walmart-300x300.jpg 300w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/walmart-150x150.jpg 150w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/walmart-768x768.jpg 768w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/walmart-624x624.jpg 624w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p><span style=\"font-weight: 400;\">Next, let&#8217;s scrape prices from Walmart.<\/span><\/p>\n<ol>\n<li><b> Inspect the Product Page:<\/b><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">\u2013 Just like with Amazon, inspect the Walmart product page to track down the price element. 
Walmart has a different HTML structure, so look for its particular class names or IDs.<\/span><\/p>\n<ol start=\"2\">\n<li><strong> Write the Scraping Code:<\/strong><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">Now that we have identified the element, here is how you can scrape its price:<\/span><\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\">import requests\r\n\r\nfrom bs4 import BeautifulSoup\r\n\r\nimport smtplib\r\n\r\nimport csv\r\n\r\nimport os\r\n\r\nfrom datetime import datetime\r\n\r\n\r\n\r\n\r\n# Proxy and headers configuration\r\n\r\nproxies = {\r\n\r\n\u00a0\u00a0\u00a0\u00a0\"https\": \"http:\/\/PROXY_USERNAME:PROXY_PASS@PROXY_SERVER:PROXY_PORT\/\"\r\n\r\n}\r\n\r\n\r\n\r\n\r\nheaders = {\r\n\r\n\u00a0\u00a0\u00a0\u00a0'user-agent': 'Mozilla\/5.0 (X11; Linux x86_64) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/87.0.4280.141 Safari\/537.36',\r\n\r\n\u00a0\u00a0\u00a0\u00a0'content-encoding': 'gzip'\r\n\r\n}\r\n\r\n\r\n\r\n\r\nurl = \"https:\/\/www.walmart.com\/ip\/Men-s-G-Shock-GA100L-8A-Tan-Silicone-Japanese-Quartz-Sport-Watch\/166515367?classType=REGULAR\"\r\n\r\n\r\n\r\n\r\n# Scraping the price from the webpage\r\n\r\ndownload_content = requests.get(url, proxies=proxies, headers=headers).text # download the HTML, routing through the proxy to avoid blocking\r\n\r\nsoup = BeautifulSoup(download_content, 'lxml')\r\n\r\nprice = soup.find(\"span\", {\"itemprop\": \"price\"}) # target the price element (assuming Walmart's itemprop=\"price\" markup)\r\n\r\nprice = price.text\r\n\r\nprice = price.replace(\"Now $\", \"\")\r\n\r\nprice = float(price)\r\n\r\n\r\n\r\n\r\n# Get the current date\r\n\r\ncurrent_date = datetime.now().strftime(\"%Y-%m-%d\")\r\n\r\n\r\n\r\n\r\n# File name for the CSV file\r\n\r\nfile_name = 'price_data.csv'\r\n\r\n\r\n\r\n\r\n# Function to save price data to CSV\r\n\r\ndef save_to_csv(url, price):\r\n\r\n\u00a0\u00a0\u00a0\u00a0file_exists = os.path.isfile(file_name)\r\n\r\n\u00a0\u00a0\u00a0\u00a0\r\n\r\n\u00a0\u00a0\u00a0\u00a0with open(file_name, 'a', newline='') as csvfile:\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0fieldnames = ['Date', 'URL', 
'Price']\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0writer = csv.DictWriter(csvfile, fieldnames=fieldnames)\r\n\r\n\r\n\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0# Write header only if the file does not exist\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0if not file_exists:\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0writer.writeheader()\r\n\r\n\r\n\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0# Write data row\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0writer.writerow({'Date': current_date, 'URL': url, 'Price': price})\r\n\r\n\r\n\r\n\r\n\u00a0\u00a0\u00a0\u00a0print(f\"Data saved to {file_name}\")\r\n\r\n\r\n\r\n\r\n# Function to send an email alert\r\n\r\ndef send_email():\r\n\r\n\u00a0\u00a0\u00a0\u00a0email = \"your email\"\r\n\r\n\u00a0\u00a0\u00a0\u00a0receiver_email = \"receiver_email\"\r\n\r\n\u00a0\u00a0\u00a0\u00a0subject = \"Walmart Price Alert\"\r\n\r\n\u00a0\u00a0\u00a0\u00a0message = f\"Great news! The price has dropped. 
The new price is now {price}!\"\r\n\r\n\u00a0\u00a0\u00a0\u00a0text = f\"Subject:{subject}\\n\\n{message}\"\r\n\r\n\r\n\r\n\r\n\u00a0\u00a0\u00a0\u00a0server = smtplib.SMTP(\"smtp.gmail.com\", 587)\r\n\r\n\u00a0\u00a0\u00a0\u00a0server.starttls()\r\n\r\n\u00a0\u00a0\u00a0\u00a0server.login(email, \"your app password\")\r\n\r\n\u00a0\u00a0\u00a0\u00a0server.sendmail(email, receiver_email, text)\r\n\r\n\u00a0\u00a0\u00a0\u00a0server.quit()\r\n\r\n\u00a0\u00a0\u00a0\u00a0print(\"Email sent!\")\r\n\r\n\r\n\r\n\r\n# Function to check price and notify if needed\r\n\r\ndef check_and_notify():\r\n\r\n\u00a0\u00a0\u00a0\u00a0if price &lt; 80: # Threshold price for notification\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0send_email()\r\n\r\n\r\n\r\n\r\n# Save the scraped data to CSV\r\n\r\nsave_to_csv(url, price)\r\n\r\n# Check the price and send an alert if needed\r\n\r\ncheck_and_notify()<\/pre>\n<p><span style=\"font-weight: 400;\">This code fetches the product&#8217;s price from Walmart. And remember: monitor changes in Walmart&#8217;s HTML structure just like you would with Amazon.<\/span><\/p>\n<h2><b>Setting up the Price Monitoring System<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">We will use the Gmail SMTP server, and we need to create an app password for the SMTP settings. 
If two-factor authentication is not enabled on your Gmail account, please enable it first, then go to the App Password page.<\/span><\/p>\n<p><strong>Enter the name of your app and click on &#8216;Create&#8217;.<\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-873 size-large\" src=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/screenshot_1-1024x497.png\" alt=\"\" width=\"1024\" height=\"497\" title=\"\" srcset=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/screenshot_1-1024x497.png 1024w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/screenshot_1-300x146.png 300w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/screenshot_1-768x373.png 768w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/screenshot_1-1536x746.png 1536w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/screenshot_1-624x303.png 624w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/screenshot_1.png 1600w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p><strong>Copy the password and store it in a secure place, then click &#8216;Done&#8217;<\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-875 size-full\" src=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/screen_shot_2.png\" alt=\"screen_shot_2\" width=\"644\" height=\"530\" title=\"\" srcset=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/screen_shot_2.png 644w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/screen_shot_2-300x247.png 300w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/09\/screen_shot_2-624x514.png 624w\" sizes=\"auto, (max-width: 644px) 100vw, 644px\" \/><\/p>\n<h2><b>Setting Up Alerts<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">A price monitoring system needs to be able to alert you if there is a great change in the market. 
You can set up an alert that notifies you whenever the price crosses above or below your predefined limits, so you do not need to check for changes manually. That way, you always know what is happening and can adjust to market changes quickly.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Let me show you how to enable email alerts for price changes.<\/span><\/p>\n<ol>\n<li><b> Choose an Alert Trigger:<\/b><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">First, determine what counts as a \u201csignificant\u201d price change for your use case. This could be a percentage change or a fixed dollar amount; for instance, you may want to be notified if the price of a product drops by more than 5%, or by more than $5.<\/span><\/p>\n<ol start=\"2\">\n<li><b> Sending Email Alerts With Python<\/b><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\"><code>smtplib<\/code> is a standard-library Python module for sending email directly from your script. It is easy to automate: when your scraper registers a major price change, it can email you an alert.<\/span><\/p>\n<ol start=\"3\">\n<li><b> Example Code to Send Email Alerts:<\/b><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">Here is how to set up email alerts, step by step.<\/span><\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\">import smtplib\r\n\r\n\r\n\r\n\r\n\r\ndef send_email():\r\n\r\n\u00a0\u00a0\u00a0\u00a0email = \"your_email\"\r\n\r\n\u00a0\u00a0\u00a0\u00a0receiver_email = \"receiver_email\"\r\n\r\n\r\n\r\n\r\n\u00a0\u00a0\u00a0\u00a0subject = \"price alert\"\r\n\r\n\u00a0\u00a0\u00a0\u00a0message = f\"Great news! The price has dropped. 
The new price is now {price}!\"\r\n\r\n\u00a0\u00a0\u00a0\u00a0text = f\"Subject:{subject}\\n\\n{message}\"\r\n\r\n\r\n\r\n\r\n\u00a0\u00a0\u00a0\u00a0server = smtplib.SMTP(\"smtp.gmail.com\", 587)\r\n\r\n\u00a0\u00a0\u00a0\u00a0server.starttls()\r\n\r\n\u00a0\u00a0\u00a0\u00a0server.login(email, \"your app password\")\r\n\r\n\u00a0\u00a0\u00a0\u00a0server.sendmail(email, receiver_email, text)\r\n\r\n\u00a0\u00a0\u00a0\u00a0server.quit()\r\n\r\n\u00a0\u00a0\u00a0\u00a0print(\"Email sent!\")\r\n\r\n\r\n\r\ndef check_and_notify():\r\n\r\n\u00a0\u00a0\u00a0\u00a0if price &lt; 80:\u00a0 # Threshold price for notification\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0send_email()<\/pre>\n<p><strong>&#8211; Explanation:<\/strong><\/p>\n<p><span style=\"font-weight: 400;\"><strong><code>send_email()<\/code><\/strong>\u00a0Function: Sends an email with a given subject and body to the receiver address. It connects to your email server (in this case, Gmail) via the <strong><code>smtplib<\/code><\/strong>\u00a0library and sends the message.<\/span><\/p>\n<p><span style=\"font-weight: 400;\"><strong>\u2013 <code>check_and_notify<\/code><\/strong>\u00a0Function: Checks whether the price has fallen below the threshold ($80 in this example). If it has, it calls <code>send_email()<\/code> to notify you.<\/span><\/p>\n<ol start=\"4\">\n<li><b> Customizing the Alert:<\/b><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">The alert message can be extended or modified to include details such as the product URL, whether the item has gone out of stock, and a link back to its page.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">At a minimum, you are notified quickly when a price changes drastically. This can be especially useful in markets where timing is crucial. 
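The \u201cmore than 5%, or more than $5\u201d trigger described earlier can be sketched as a small helper. This is a minimal illustration, not part of the tutorial&#8217;s code; the function name and default thresholds are assumptions you would tune to your own use case:

```python
def is_significant_change(old_price, new_price,
                          pct_threshold=5.0, abs_threshold=5.0):
    # True if the price moved by more than pct_threshold percent
    # OR by more than abs_threshold dollars, in either direction.
    diff = abs(new_price - old_price)
    pct = (diff / old_price) * 100 if old_price else 0.0
    return pct > pct_threshold or diff > abs_threshold

# A drop from $100 to $92 is an 8% / $8 move, so it triggers:
print(is_significant_change(100.0, 92.0))   # True
# A drop from $100 to $99 (1%, $1) does not:
print(is_significant_change(100.0, 99.0))   # False
```

You could call a helper like this inside <code>check_and_notify()<\/code>, comparing the newly scraped price against the last price saved in your CSV, instead of using a fixed $80 cutoff.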
And with automated alerts, you can respond in real time: adjust your own price, make a more informed decision about whether buying at that moment is optimal, and track prices over time.<\/span><\/p>\n<h2><b>Legal and Ethical Concerns<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">When web scraping, it is important to act responsibly and lawfully. Web scraping is a powerful way to collect and structure information, but websites are someone else&#8217;s property. Failing to respect the rules can lead to legal trouble, an IP ban, or damage to your reputation.<\/span><\/p>\n<p><b>Compliance With Website Terms of Service<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Always read the terms of service (ToS) before you start scraping a website. Many websites state explicitly whether web scraping is allowed. For instance, some e-commerce sites may allow scraping for personal use but not grant the right to use the data commercially. Breaking these terms could get you into legal trouble or blacklisted.<\/span><\/p>\n<p><b>Understanding robots.txt:<\/b><\/p>\n<p><span style=\"font-weight: 400;\">A robots.txt file describes which parts of a site can and cannot be accessed by automated tools. While robots.txt is not a legally binding document, following it is considered good practice; ignoring it can result in your scraper being blocked or your IP blacklisted.<\/span><\/p>\n<p><b>Preventing Overload on Servers<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Scraping sends many requests to a server to extract data. If your scraper sends frequent requests in a short interval, the server can become overloaded and slow down, or even crash. Beyond being bad practice, your scraper can easily be labelled as malicious if the site operator notices you are scraping too aggressively. 
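One common way to keep request volume polite is to enforce a minimum interval between consecutive requests. The sketch below is an illustration only; the <code>RequestThrottle<\/code> name is an assumption, not part of the tutorial&#8217;s code, and you would call <code>wait()<\/code> just before each scraping request:

```python
import time

class RequestThrottle:
    """Enforce a minimum interval between consecutive requests."""

    def __init__(self, min_interval=2.0):
        self.min_interval = min_interval   # seconds between requests
        self._last_call = None

    def wait(self):
        # Sleep just long enough that calls are at least
        # min_interval seconds apart, then record the call time.
        now = time.monotonic()
        if self._last_call is not None:
            remaining = self.min_interval - (now - self._last_call)
            if remaining > 0:
                time.sleep(remaining)
        self._last_call = time.monotonic()

# Example: three "requests" spaced at least 0.1 s apart.
throttle = RequestThrottle(min_interval=0.1)
start = time.monotonic()
for _ in range(3):
    throttle.wait()
    # response = requests.get(url)  # your scraping call would go here
elapsed = time.monotonic() - start
print(elapsed >= 0.2)   # True: the second and third calls each waited ~0.1 s
```

The first call goes through immediately; every subsequent call is delayed only as long as needed, so a slow page fetch does not add an unnecessary extra pause on top.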
To get around this, space your requests out with a delay between each one and cap the number of requests you make per second.<\/span><\/p>\n<p><b>Take Due Care with Personal Data:<\/b><\/p>\n<p><span style=\"font-weight: 400;\">If you scrape user reviews or comments, your script will also collect personal data, which must be handled responsibly. Make sure your solution complies with data protection regulations such as the GDPR (which regulates how personal data can be collected, stored, and used) if you target Europe-based consumers.<\/span><\/p>\n<ul>\n<li><a href=\"https:\/\/github.com\/MDFARHYN\/retail_price_scraping\" rel=\"nofollow noopener\" target=\"_blank\">Download the full code from the GitHub repo<\/a><\/li>\n<li><a href=\"https:\/\/www.youtube.com\/watch?v=sGy8t3pS15k&amp;t=79s\" rel=\"nofollow noopener\" target=\"_blank\">Watch the full tutorial on YouTube<\/a><\/li>\n<li><span style=\"font-weight: 400;\">Chrome extension download link: <\/span><a href=\"https:\/\/chromewebstore.google.com\/detail\/selectorgadget\/mhjhnkcfbdhnjickkkdbjoemdmbfginb?hl=en&amp;pli=1\" rel=\"nofollow noopener\" target=\"_blank\"><span style=\"font-weight: 400;\">download from the Chrome Web Store<\/span><\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Watch how you can setup an automated retail price monitoring system using Python: Important links to get started Download the full code from GitHub 
repo&hellip;<\/p>\n","protected":false},"author":23,"featured_media":877,"comment_status":"open","ping_status":"closed","template":"","meta":{"rank_math_lock_modified_date":false},"categories":[],"class_list":["post-856","scraping_project","type-scraping_project","status-publish","has-post-thumbnail","hentry"],"_links":{"self":[{"href":"https:\/\/rayobyte.com\/community\/wp-json\/wp\/v2\/scraping_project\/856","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rayobyte.com\/community\/wp-json\/wp\/v2\/scraping_project"}],"about":[{"href":"https:\/\/rayobyte.com\/community\/wp-json\/wp\/v2\/types\/scraping_project"}],"author":[{"embeddable":true,"href":"https:\/\/rayobyte.com\/community\/wp-json\/wp\/v2\/users\/23"}],"replies":[{"embeddable":true,"href":"https:\/\/rayobyte.com\/community\/wp-json\/wp\/v2\/comments?post=856"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/rayobyte.com\/community\/wp-json\/wp\/v2\/media\/877"}],"wp:attachment":[{"href":"https:\/\/rayobyte.com\/community\/wp-json\/wp\/v2\/media?parent=856"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rayobyte.com\/community\/wp-json\/wp\/v2\/categories?post=856"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}