<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
		>

<channel>
	<title>Rayobyte Community | Astghik Kendra | Activity</title>
	<link>https://rayobyte.com/community/members/astghikkendra/activity/</link>
	<atom:link href="https://rayobyte.com/community/members/astghikkendra/activity/feed/" rel="self" type="application/rss+xml" />
	<description>Activity feed for Astghik Kendra.</description>
	<lastBuildDate>Mon, 06 Apr 2026 06:44:03 +0000</lastBuildDate>
	<generator>https://buddypress.org/?v=2.6.80</generator>
	<language>en-US</language>
	<ttl>30</ttl>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>2</sy:updateFrequency>
		
								<item>
				<guid isPermaLink="false">097b20b9257637c3a1e414161db2336a</guid>
				<title>Astghik Kendra replied to the discussion Scrape product name, price, and rating from a Thai e-commerce site with Node.js? in the forum General Web Scraping</title>
				<link>https://rayobyte.com/community/discussion/scrape-product-name-price-and-rating-from-a-thai-e-commerce-site-with-node-js/#post-2520</link>
				<pubDate>Thu, 12 Dec 2024 10:53:46 +0000</pubDate>

									<content:encoded><![CDATA[<p class="activity-discussion-title-wrap"><a href="https://rayobyte.com/community/discussion/scrape-product-name-price-and-rating-from-a-thai-e-commerce-site-with-node-js/#post-2577"><span class="bb-reply-lable">Reply to</span> Scrape product name, price, and rating from a Thai e-commerce site with Node.js?</a></p> <div class="bb-content-inr-wrap"><p>To scrape the product price, you can use Puppeteer to extract the price located in a span or div with a class related to pricing. Ensure that the element containing the price has fully loaded by waiting for the selector to appear. Once loaded, you can extract the price text and format it for use.</p>
<pre><p>const puppeteer = require('puppeteer');
(async ()&hellip;</p></pre>
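<p>A fuller sketch of that approach (the <code>.product-price</code> selector and the currency format are assumptions, not the site's actual markup) might look like:</p>

```javascript
// Sketch: wait for the price element, read its text, and normalize it.
// Pure helper: strip currency symbols and thousands separators so the
// price text can be used as a number (e.g. "฿1,299.00" -> 1299).
function parsePrice(text) {
  const cleaned = text.replace(/[^0-9.]/g, '');
  return parseFloat(cleaned);
}

async function scrapePrice(url) {
  const puppeteer = require('puppeteer'); // assumed to be installed
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle2' });
  // Wait until the price element has actually rendered before reading it.
  await page.waitForSelector('.product-price');
  const raw = await page.$eval('.product-price', el => el.textContent);
  await browser.close();
  return parsePrice(raw);
}
```

<p>Keeping the parsing step in a separate function makes it testable without launching a browser.</p>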
<p><span class="activity-read-more" id="activity-read-more-2196"><a href="https://rayobyte.com/community/discussion/scrape-product-name-price-and-rating-from-a-thai-e-commerce-site-with-node-js/#post-2520" rel="nofollow"> Read more</a></span></p>
</div>]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">b8bbe2441fb211af157ad1b0399e296b</guid>
				<title>Astghik Kendra replied to the discussion Scrape product name, price, and stock from Tops Thailand using Puppeteer? in the forum General Web Scraping</title>
				<link>https://rayobyte.com/community/discussion/scrape-product-name-price-and-stock-from-tops-thailand-using-puppeteer/#post-2523</link>
				<pubDate>Thu, 12 Dec 2024 10:52:16 +0000</pubDate>

									<content:encoded><![CDATA[<p class="activity-discussion-title-wrap"><a href="https://rayobyte.com/community/discussion/scrape-product-name-price-and-stock-from-tops-thailand-using-puppeteer/#post-2576"><span class="bb-reply-lable">Reply to</span> Scrape product name, price, and stock from Tops Thailand using Puppeteer?</a></p> <div class="bb-content-inr-wrap"><p>For scraping stock availability, Puppeteer is ideal for extracting information like “in stock” or “out of stock” text. Once the product page loads, find the availability status in a div or span with a specific class. Using page.$eval(), you can target that specific element to check whether the product is available for purchase.</p>
<pre><p>const&hellip;</p></pre>
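<p>As a minimal sketch of that idea (the <code>.stock-status</code> selector is a placeholder; inspect the real product page to find the element that shows availability):</p>

```javascript
// Pure helper: map an availability label to a boolean. Returns true
// unless the label says the item is out of stock; tolerant of case
// and surrounding whitespace ("Out of Stock", "OUT OF STOCK", ...).
function parseAvailability(text) {
  return !/out of stock/i.test(text.trim());
}

async function scrapeAvailability(url) {
  const puppeteer = require('puppeteer'); // assumed to be installed
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle2' });
  // page.$eval reads the first element matching the selector.
  const label = await page.$eval('.stock-status', el => el.textContent);
  await browser.close();
  return parseAvailability(label);
}
```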
<p><span class="activity-read-more" id="activity-read-more-2195"><a href="https://rayobyte.com/community/discussion/scrape-product-name-price-and-stock-from-tops-thailand-using-puppeteer/#post-2523" rel="nofollow"> Read more</a></span></p>
</div>]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">6deb78350633455390388231acb3625b</guid>
				<title>Astghik Kendra started the discussion Scrape product specifications, images, shipping details -Amazon Brazil -Python in the forum General Web Scraping</title>
				<link>https://rayobyte.com/community/forums/general-web-scraping/</link>
				<pubDate>Thu, 12 Dec 2024 10:49:58 +0000</pubDate>

									<content:encoded><![CDATA[<p class="activity-discussion-title-wrap"><a href="https://rayobyte.com/community/discussion/scrape-product-specifications-images-shipping-details-amazon-brazil-python/">Scrape product specifications, images, shipping details -Amazon Brazil -Python</a></p> <div class="bb-content-inr-wrap"><p>To scrape product specifications from Amazon Brazil, use Selenium to locate the product details section, often formatted as a table or list. Extract the specifications using find_elements and iterate through the rows or items to collect the data.</p>
<pre><p>from selenium import webdriver
from selenium.webdriver.common.by import By</p><p>driver&hellip;</p></pre>
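<p>A sketch of that row-iteration step (the <code>table.spec-table</code> selector is a placeholder; inspect the actual page to find the real specification-table markup):</p>

```python
# Pure helper: turn (label, value) text pairs from table rows into a dict.
def rows_to_specs(pairs):
    return {label.strip(): value.strip() for label, value in pairs}


def scrape_specs(url):
    # Assumes selenium and a matching browser driver are installed.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get(url)
        # find_elements returns every row; iterate to collect label/value cells.
        rows = driver.find_elements(By.CSS_SELECTOR, "table.spec-table tr")
        pairs = [
            (row.find_element(By.TAG_NAME, "th").text,
             row.find_element(By.TAG_NAME, "td").text)
            for row in rows
        ]
        return rows_to_specs(pairs)
    finally:
        driver.quit()
```

<p>Separating the dict-building step from the browser calls keeps the parsing logic easy to verify on its own.</p>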
<p><span class="activity-read-more" id="activity-read-more-2194"><a href="https://rayobyte.com/community/forums/general-web-scraping/" rel="nofollow"> Read more</a></span></p>
</div>]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">742105c063303ddb75cac9abf81ded4e</guid>
				<title>Astghik Kendra changed their photo</title>
				<link>https://rayobyte.com/community/news-feed/p/2193/</link>
				<pubDate>Thu, 12 Dec 2024 10:46:44 +0000</pubDate>

				
									<slash:comments>0</slash:comments>
				
							</item>
					<item>
				<guid isPermaLink="false">86d196061faf43a5201943aa92a1c9c9</guid>
				<title>Astghik Kendra became a registered member</title>
				<link>https://rayobyte.com/community/news-feed/p/2192/</link>
				<pubDate>Thu, 12 Dec 2024 10:42:03 +0000</pubDate>

				
									<slash:comments>0</slash:comments>
				
							</item>
		
	</channel>
</rss>
		