<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
		>

<channel>
	<title>Rayobyte Community | Gerel Tomislav | Activity</title>
	<link>https://rayobyte.com/community/members/gereltomislav/activity/</link>
	<atom:link href="https://rayobyte.com/community/members/gereltomislav/activity/feed/" rel="self" type="application/rss+xml" />
	<description>Activity feed for Gerel Tomislav.</description>
	<lastBuildDate>Mon, 06 Apr 2026 06:44:03 +0000</lastBuildDate>
	<generator>https://buddypress.org/?v=2.6.80</generator>
	<language>en-US</language>
	<ttl>30</ttl>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>2</sy:updateFrequency>
		
								<item>
				<guid isPermaLink="false">985b106c7741872d88304cdcf87b5f49</guid>
				<title>Gerel Tomislav replied to the discussion Scrape customer reviews from Tesco Lotus Thailand using Node.js and Puppeteer? in the forum General Web Scraping</title>
				<link>https://rayobyte.com/community/discussion/scrape-customer-reviews-from-tesco-lotus-thailand-using-node-js-and-puppeteer/#post-2517</link>
				<pubDate>Fri, 13 Dec 2024 06:36:46 +0000</pubDate>

									<content:encoded><![CDATA[<p class = "activity-discussion-title-wrap"><a href="https://rayobyte.com/community/discussion/scrape-customer-reviews-from-tesco-lotus-thailand-using-node-js-and-puppeteer/#post-2624"><span class="bb-reply-lable">Reply to</span> Scrape customer reviews from Tesco Lotus Thailand using Node.js and Puppeteer?</a></p> <div class="bb-content-inr-wrap"><p>To scrape customer reviews from Tesco Lotus Thailand using Puppeteer, you&#8217;ll need to handle dynamic content. Once you navigate to a product page, Puppeteer lets you wait for the reviews section to finish loading before you extract anything. Review details such as user ratings and comments can then be pulled out with CSS selectors. It&#8217;s also important to handle any&hellip;</p>
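<p>The wait-then-extract step above can be sketched as follows. This is an illustration in Python using the standard-library parser in place of Puppeteer&#8217;s page evaluation, and the <code>review-rating</code> / <code>review-comment</code> class names are hypothetical &#8212; the real selectors must be taken from the page:</p>

```python
from html.parser import HTMLParser

# Stand-in for Puppeteer's page.$$eval: collect text from elements whose
# class matches the (hypothetical) review selectors.
class ReviewParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.reviews = []
        self._field = None  # "rating" or "comment" while inside a matching tag

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class") or ""
        if "review-rating" in classes:
            self._field = "rating"
        elif "review-comment" in classes:
            self._field = "comment"

    def handle_data(self, data):
        text = data.strip()
        if self._field and text:
            self.reviews.append((self._field, text))
            self._field = None

def extract_reviews(html):
    parser = ReviewParser()
    parser.feed(html)
    return parser.reviews

sample = '<div class="review-rating">4</div><div class="review-comment">Good value</div>'
print(extract_reviews(sample))  # [('rating', '4'), ('comment', 'Good value')]
```

<p>In a real Puppeteer run, the HTML would come from the page after a <code>waitForSelector</code> on the reviews container rather than from a static string.</p>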
<p><span class="activity-read-more" id="activity-read-more-2262"><a href="https://rayobyte.com/community/discussion/scrape-customer-reviews-from-tesco-lotus-thailand-using-node-js-and-puppeteer/#post-2517" rel="nofollow"> Read more</a></span></p>
</div>]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">2002f56c9680ce737308774d90a1f0df</guid>
				<title>Gerel Tomislav replied to the discussion Scrape product availability, price from Central Thailand’s using Python n Scrapy in the forum General Web Scraping</title>
				<link>https://rayobyte.com/community/discussion/scrape-product-availability-price-from-central-thailands-using-python-n-scrapy/#post-2511</link>
				<pubDate>Fri, 13 Dec 2024 06:35:45 +0000</pubDate>

									<content:encoded><![CDATA[<p class = "activity-discussion-title-wrap"><a href="https://rayobyte.com/community/discussion/scrape-product-availability-price-from-central-thailands-using-python-n-scrapy/#post-2623"><span class="bb-reply-lable">Reply to</span> Scrape product availability, price from Central Thailand’s using Python n Scrapy</a></p> <div class="bb-content-inr-wrap"><p>Scraping Central&#8217;s Thailand site requires attention to both static and dynamic content, especially for prices and product availability. Scrapy excels at parsing static HTML, but on pages that load content dynamically you should confirm the data is actually present in the initial response rather than injected later by JavaScript. Scraping the prices and availability status can&hellip;</p>
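<p>A minimal sketch of the Scrapy parse callback implied above. The CSS selectors are hypothetical, and the <code>FakeResponse</code> stub stands in for a real Scrapy response so the extraction logic can be exercised without a live crawl:</p>

```python
# Sketch of a Scrapy parse callback. `response` is anything exposing
# Scrapy's response.css(...).get() API; the selectors are hypothetical.
def parse_product(response):
    return {
        "name": response.css("h1.product-name::text").get(),
        "price": response.css("span.price::text").get(),
        "available": response.css("div.stock-status::text").get() == "In stock",
    }

class FakeResponse:
    """Minimal stand-in mimicking response.css(...).get() for offline testing."""
    def __init__(self, data):
        self._data = data

    def css(self, query):
        data = self._data
        class _Sel:
            def get(self):
                return data.get(query)
        return _Sel()

resp = FakeResponse({
    "h1.product-name::text": "Rice Cooker",
    "span.price::text": "1,290 THB",
    "div.stock-status::text": "In stock",
})
print(parse_product(resp))  # {'name': 'Rice Cooker', 'price': '1,290 THB', 'available': True}
```

<p>Keeping the callback free of crawl machinery makes it easy to verify the selector logic against saved HTML before running the spider.</p>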
<p><span class="activity-read-more" id="activity-read-more-2260"><a href="https://rayobyte.com/community/discussion/scrape-product-availability-price-from-central-thailands-using-python-n-scrapy/#post-2511" rel="nofollow"> Read more</a></span></p>
</div>]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">6fe6d7cd763c495fe9ba7f49e0309640</guid>
				<title>Gerel Tomislav started the discussion Extract reviews, pricing, product specifications from Tesco UK using Node.js in the forum General Web Scraping</title>
				<link>https://rayobyte.com/community/forums/general-web-scraping/</link>
				<pubDate>Fri, 13 Dec 2024 06:31:21 +0000</pubDate>

									<content:encoded><![CDATA[<p class = "activity-discussion-title-wrap"><a href="https://rayobyte.com/community/discussion/extract-reviews-pricing-product-specifications-from-tesco-uk-using-node-js/">Extract reviews, pricing, product specifications from Tesco UK using Node.js</a></p> <div class="bb-content-inr-wrap"><p>Scraping data from Tesco UK requires careful planning to handle the dynamic content commonly found on their website. The first step is to identify the key elements on the webpage that correspond to customer reviews, pricing trends, and product specifications. This can be done by inspecting the HTML structure using browser developer tools.&hellip;</p>
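<p>Once the relevant elements are identified in developer tools, product specifications can be folded into a dictionary. A standard-library Python sketch (the post names Node.js; this only illustrates the parsing step), assuming the hypothetical case that specs render as <code>dt</code>/<code>dd</code> pairs:</p>

```python
from html.parser import HTMLParser

# Assumes (hypothetically) that specifications are rendered as
# <dt>label</dt><dd>value</dd> pairs; confirm the real structure
# in browser developer tools first.
class SpecParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.specs = {}
        self._tag = None
        self._key = None

    def handle_starttag(self, tag, attrs):
        if tag in ("dt", "dd"):
            self._tag = tag

    def handle_data(self, data):
        text = data.strip()
        if not text or self._tag is None:
            return
        if self._tag == "dt":
            self._key = text
        elif self._tag == "dd" and self._key:
            self.specs[self._key] = text
            self._key = None
        self._tag = None

def extract_specs(html):
    parser = SpecParser()
    parser.feed(html)
    return parser.specs

sample = "<dl><dt>Weight</dt><dd>500g</dd><dt>Brand</dt><dd>Tesco</dd></dl>"
print(extract_specs(sample))  # {'Weight': '500g', 'Brand': 'Tesco'}
```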
<p><span class="activity-read-more" id="activity-read-more-2258"><a href="https://rayobyte.com/community/forums/general-web-scraping/" rel="nofollow"> Read more</a></span></p>
</div>]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">caf984edbf341c4c5ea9b56dcaf36ed4</guid>
				<title>Gerel Tomislav changed their photo</title>
				<link>https://rayobyte.com/community/news-feed/p/2257/</link>
				<pubDate>Fri, 13 Dec 2024 05:57:23 +0000</pubDate>

				
									<slash:comments>0</slash:comments>
				
							</item>
					<item>
				<guid isPermaLink="false">275e38f9ff61ffd220b1504ae90b43f8</guid>
				<title>Gerel Tomislav became a registered member</title>
				<link>https://rayobyte.com/community/news-feed/p/2256/</link>
				<pubDate>Fri, 13 Dec 2024 05:54:59 +0000</pubDate>

				
									<slash:comments>0</slash:comments>
				
							</item>
		
	</channel>
</rss>
		