<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
		>

<channel>
	<title>Rayobyte Community | Monica Nerva | Activity</title>
	<link>https://rayobyte.com/community/members/monicanerva/activity/</link>
	<atom:link href="https://rayobyte.com/community/members/monicanerva/activity/feed/" rel="self" type="application/rss+xml" />
	<description>Activity feed for Monica Nerva.</description>
	<lastBuildDate>Mon, 06 Apr 2026 06:44:03 +0000</lastBuildDate>
	<generator>https://buddypress.org/?v=2.6.80</generator>
	<language>en-US</language>
	<ttl>30</ttl>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>2</sy:updateFrequency>
		
								<item>
				<guid isPermaLink="false">2fea27209f77a23933857ec68591888e</guid>
				<title>Monica Nerva replied to the discussion Scrape product name, price, stock availability from Argos UK using Go and Colly? in the forum General Web Scraping</title>
				<link>https://rayobyte.com/community/discussion/scrape-product-name-price-stock-availability-from-argos-uk-using-go-and-colly/#post-2578</link>
				<pubDate>Fri, 13 Dec 2024 11:03:59 +0000</pubDate>

									<content:encoded><![CDATA[<p class="activity-discussion-title-wrap"><a href="https://rayobyte.com/community/discussion/scrape-product-name-price-stock-availability-from-argos-uk-using-go-and-colly/#post-2675"><span class="bb-reply-lable">Reply to</span> Scrape product name, price, stock availability from Argos UK using Go and Colly?</a></p> <div class="bb-content-inr-wrap"><p>For stock availability, use Colly to find the element that indicates whether the product is in stock or out of stock. Extract the text from the element, which might show terms like “In Stock” or “Out of Stock.”</p>
<pre>package main

import (
	"fmt"
	"log"

	"github.com/gocolly/colly"
)

func main() {
	// Create a new collector
	c := colly.NewCollector()
	//&hellip;</pre>
<p><span class="activity-read-more" id="activity-read-more-2341"><a href="https://rayobyte.com/community/discussion/scrape-product-name-price-stock-availability-from-argos-uk-using-go-and-colly/#post-2578" rel="nofollow"> Read more</a></span></p>
</div>]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">e74dbf7b43b543e80cc97060a4475ac0</guid>
				<title>Monica Nerva replied to the discussion How can I scrape product features, price, promotions from Ponto Frio Brazil? in the forum General Web Scraping</title>
				<link>https://rayobyte.com/community/discussion/how-can-i-scrape-product-features-price-promotions-from-ponto-frio-brazil/#post-2572</link>
				<pubDate>Fri, 13 Dec 2024 11:02:23 +0000</pubDate>

									<content:encoded><![CDATA[<p class="activity-discussion-title-wrap"><a href="https://rayobyte.com/community/discussion/how-can-i-scrape-product-features-price-promotions-from-ponto-frio-brazil/#post-2674"><span class="bb-reply-lable">Reply to</span> How can I scrape product features, price, promotions from Ponto Frio Brazil?</a></p> <div class="bb-content-inr-wrap"><p>To scrape promotions, locate the section where discounts or special offers are displayed. Use page.$eval() to extract the promotion text. This is often presented as a percentage discount or special note within the product section.</p>
<pre>const puppeteer = require('puppeteer');

(async () =&gt; {
    const browser = await puppeteer.launch({ headless: true&hellip;</pre>
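Once page.$eval() has returned the promotion text, the percentage can be normalized with a small parser. A sketch, assuming the promo string embeds a figure like "15% OFF" or "Desconto de 7,5%" (the real markup may differ):

```javascript
// Parse a discount percentage out of free-form promotion text.
// The "NN%" pattern (with either "." or "," as the decimal
// separator) is an assumption about how offers are displayed.
function parseDiscount(promoText) {
  const match = /(\d+(?:[.,]\d+)?)\s*%/.exec(promoText);
  if (!match) return null;
  return parseFloat(match[1].replace(',', '.'));
}

console.log(parseDiscount('15% OFF'));          // 15
console.log(parseDiscount('Desconto de 7,5%')); // 7.5
console.log(parseDiscount('Frete grátis'));     // null
```

Returning null for non-discount promotions lets the caller keep free-shipping notes separate from percentage offers.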
<p><span class="activity-read-more" id="activity-read-more-2340"><a href="https://rayobyte.com/community/discussion/how-can-i-scrape-product-features-price-promotions-from-ponto-frio-brazil/#post-2572" rel="nofollow"> Read more</a></span></p>
</div>]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">69657ddd865a49b865fb365cc418184b</guid>
				<title>Monica Nerva started the discussion Extract shipping policies from Ceneo Poland using Node.js in the forum General Web Scraping</title>
				<link>https://rayobyte.com/community/forums/general-web-scraping/</link>
				<pubDate>Fri, 13 Dec 2024 10:59:54 +0000</pubDate>

									<content:encoded><![CDATA[<p class="activity-discussion-title-wrap"><a href="https://rayobyte.com/community/discussion/extract-shipping-policies-from-ceneo-poland-using-node-js/">Extract shipping policies from Ceneo Poland using Node.js</a></p> <div class="bb-content-inr-wrap"><p>Ceneo is one of the most popular price comparison websites in Poland, helping customers find the best deals on various products. Scraping shipping policies from Ceneo involves focusing on the delivery details that are often provided by individual sellers. These policies typically include shipping costs, estimated delivery times, and options&hellip;</p>
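The normalization step described above (shipping cost plus a delivery estimate) can be sketched as a small Node.js parser. The input format here, "Dostawa: 9,99 zł, 2-3 dni", is a hypothetical seller string for illustration; real Ceneo listings will need their own selectors and formats:

```javascript
// Parse a hypothetical Ceneo-style shipping line such as
// "Dostawa: 9,99 zł, 2-3 dni" into structured fields.
// Both the price pattern (comma decimal, "zł") and the
// "N-M dni" range are assumptions, not confirmed markup.
function parseShipping(line) {
  const cost = /(\d+(?:,\d+)?)\s*zł/.exec(line);
  const days = /(\d+)\s*-\s*(\d+)\s*dni/.exec(line);
  return {
    costPln: cost ? parseFloat(cost[1].replace(',', '.')) : null,
    minDays: days ? parseInt(days[1], 10) : null,
    maxDays: days ? parseInt(days[2], 10) : null,
  };
}

console.log(parseShipping('Dostawa: 9,99 zł, 2-3 dni'));
// { costPln: 9.99, minDays: 2, maxDays: 3 }
```

Separating parsing from page fetching keeps the seller-by-seller policy extraction testable offline.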
<p><span class="activity-read-more" id="activity-read-more-2339"><a href="https://rayobyte.com/community/forums/general-web-scraping/" rel="nofollow"> Read more</a></span></p>
</div>]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">5ea4772be4a1f46a1470d457cb2dd002</guid>
				<title>Monica Nerva changed their photo</title>
				<link>https://rayobyte.com/community/news-feed/p/2338/</link>
				<pubDate>Fri, 13 Dec 2024 10:56:56 +0000</pubDate>

				
									<slash:comments>0</slash:comments>
				
							</item>
					<item>
				<guid isPermaLink="false">95955b287e3df2e3ea6ba281e2325ad0</guid>
				<title>Monica Nerva became a registered member</title>
				<link>https://rayobyte.com/community/news-feed/p/2337/</link>
				<pubDate>Fri, 13 Dec 2024 10:56:24 +0000</pubDate>

				
									<slash:comments>0</slash:comments>
				
							</item>
		
	</channel>
</rss>
		