{"id":1581,"date":"2024-11-12T21:28:47","date_gmt":"2024-11-12T21:28:47","guid":{"rendered":"https:\/\/rayobyte.com\/community\/?post_type=scraping_project&#038;p=1581"},"modified":"2024-11-15T15:31:54","modified_gmt":"2024-11-15T15:31:54","slug":"build-a-youtube-scraper-in-python-to-extract-video-data","status":"publish","type":"scraping_project","link":"https:\/\/rayobyte.com\/community\/scraping-project\/build-a-youtube-scraper-in-python-to-extract-video-data\/","title":{"rendered":"Build a YouTube Scraper in Python to Extract Video Data"},"content":{"rendered":"<p><iframe loading=\"lazy\" title=\"Build a YouTube Scraper in Python to Extract Video Data #python #webscraper  #webscraping\" width=\"640\" height=\"360\" src=\"https:\/\/www.youtube.com\/embed\/tXiD9XnCBXg?feature=oembed&#038;enablejsapi=1&#038;origin=https:\/\/rayobyte.com\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p><a href=\"https:\/\/github.com\/MDFARHYN\/-YouTube-Scraper-Python\" rel=\"nofollow noopener\" target=\"_blank\">Download the full source code from GitHub<\/a><\/p>\n<h1><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-1585 size-full\" src=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/YouTube_Scraper_Python_Thumbnail.png\" alt=\"YouTube_Scraper_Python_Thumbnail\" width=\"1024\" height=\"1024\" title=\"\" srcset=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/YouTube_Scraper_Python_Thumbnail.png 1024w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/YouTube_Scraper_Python_Thumbnail-300x300.png 300w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/YouTube_Scraper_Python_Thumbnail-150x150.png 150w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/YouTube_Scraper_Python_Thumbnail-768x768.png 768w, 
https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/YouTube_Scraper_Python_Thumbnail-624x624.png 624w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/h1>\n<h1>Table of Contents<\/h1>\n<ul>\n<li><a href=\"#Introduction\"><b>Introduction<\/b><\/a><\/li>\n<li><a href=\"#Getting started with the YouTube API\"><strong>Getting started with the YouTube API<\/strong><\/a><\/li>\n<li><a href=\"#Setup Google Cloud Project\"><strong>Setup Google Cloud Project<\/strong><\/a><\/li>\n<li><a href=\"#YouTube Scraper Based On Keywords\"><strong>YouTube Scraper Based On Keywords<\/strong><\/a><\/li>\n<li><a href=\"#Detailed Video Stats via YouTube API\"><strong>Detailed Video Stats via YouTube API<\/strong><\/a><\/li>\n<li><a href=\"#Get the video\u2019s data via the API\"><strong>Get the video\u2019s data via the API<\/strong><\/a><\/li>\n<li><a href=\"#API Rate Limitations\"><strong>API Rate Limitations<\/strong><\/a><\/li>\n<li><a href=\"#Youtube Scraping without API\"><strong>Youtube Scraping without API<\/strong><\/a><\/li>\n<li><a href=\"#Selenium Stealth\"><strong>Selenium Stealth<\/strong><\/a><\/li>\n<li><a href=\"#JSON-LD Data\"><strong>JSON-LD Data<\/strong><\/a><\/li>\n<li><a href=\"#Related Videos\"><strong>Related Videos<\/strong><\/a><\/li>\n<li><a href=\"#Bypassing Bot Detection with Proxies\"><strong>Bypassing Bot Detection with Proxies<\/strong><\/a><\/li>\n<\/ul>\n<h1 id=\"Introduction\"><b>Introduction<\/b><\/h1>\n<p><span style=\"font-weight: 400;\">YouTube offers a wealth of information on trends and audience behavior, which can support content analysis through approaches like tracking popular topics. 
This tutorial covers two approaches to scraping YouTube:<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\"><b>Fetching data via the YouTube API: <\/b><span style=\"font-weight: 400;\">the official way to fetch structured data directly from YouTube, including video titles, descriptions, views, likes, comments, and more.<\/span><\/li>\n<li style=\"font-weight: 400;\"><b>Scraping without the API: <\/b><span style=\"font-weight: 400;\">using Selenium Stealth to evade bot detection and scrape data directly from the website.<\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">By the end, you will know both strategies and can choose the one that better fits your project requirements.<\/span><\/p>\n<h1 id=\"Getting started with the YouTube API\"><b>Getting started with the YouTube API<\/b><\/h1>\n<p><span style=\"font-weight: 400;\">Now, let&#8217;s get to work.<\/span><\/p>\n<p><b id=\"Setup Google Cloud Project\">Setup Google Cloud Project<\/b><\/p>\n<p><span style=\"font-weight: 400;\">To begin using the YouTube Data API, you first have to create a project in the Google Cloud Console:<\/span><\/p>\n<p><b>Step 1: Open Google Cloud Console<\/b><\/p>\n<p><span style=\"font-weight: 400;\">At the top, click Select a Project and select New Project.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Give your project a name and choose a location for it.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Click &#8220;Create&#8221;. 
After the project is created, you&#8217;ll see it in your available projects.<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-1587 size-full\" src=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/create_project.png\" alt=\"\" width=\"739\" height=\"493\" title=\"\" srcset=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/create_project.png 739w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/create_project-300x200.png 300w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/create_project-624x416.png 624w\" sizes=\"auto, (max-width: 739px) 100vw, 739px\" \/><\/p>\n<p><b>Step 2: Activate the YouTube Data API<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Once you have created your project, you need to activate the YouTube Data API:<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Navigate to <\/span><b>APIs &amp; Services &gt; Library <\/b><span style=\"font-weight: 400;\">in the Google Cloud Console.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Search for YouTube Data API v3 and click on it.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Click Enable to add it to your project.<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-1591 size-full\" src=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/youtube_demo.png\" alt=\"\" width=\"1919\" height=\"948\" title=\"\" srcset=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/youtube_demo.png 1919w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/youtube_demo-300x148.png 300w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/youtube_demo-1024x506.png 1024w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/youtube_demo-768x379.png 768w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/youtube_demo-1536x759.png 1536w, 
https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/youtube_demo-624x308.png 624w\" sizes=\"auto, (max-width: 1919px) 100vw, 1919px\" \/><\/p>\n<p><span style=\"font-weight: 400;\">Click on <\/span><b>APIs &amp; Services &gt; Credentials<\/b><span style=\"font-weight: 400;\"> and create API credentials.<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-1593 size-full\" src=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/create_credentials.png\" alt=\"\" width=\"1918\" height=\"968\" title=\"\" srcset=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/create_credentials.png 1918w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/create_credentials-300x151.png 300w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/create_credentials-1024x517.png 1024w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/create_credentials-768x388.png 768w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/create_credentials-1536x775.png 1536w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/create_credentials-624x315.png 624w\" sizes=\"auto, (max-width: 1918px) 100vw, 1918px\" \/><\/p>\n<p><span style=\"font-weight: 400;\">Click Create Credentials and select API key. Copy the key; you will need it for your API calls.<\/span><\/p>\n<p><b>Step 3: Install Required Libraries<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Now that you have your API key, you will need a few Python libraries to fetch data from the API and process it.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We can install the<\/span><b> google-api-python-client <\/b><span style=\"font-weight: 400;\">library by executing:<\/span><\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">pip install google-api-python-client<\/pre>\n<p><span style=\"font-weight: 400;\">With this setup you are authenticated with the YouTube API and have access to video details. Now you can start building your YouTube scrapers!<\/span><\/p>\n<h1 id=\"YouTube Scraper Based On Keywords\"><b>Creating a YouTube Scraper Based On Keywords<\/b><\/h1>\n<p><span style=\"font-weight: 400;\">Define your search parameters. We can use YouTube&#8217;s search endpoint to find videos by keyword. This endpoint lets us filter search results on various parameters, e.g.:<\/span><\/p>\n<p><b>q: <\/b><span style=\"font-weight: 400;\">The search string.<\/span><\/p>\n<p><b>maxResults: <\/b><span style=\"font-weight: 400;\">Maximum number of results to return in a single request (up to 50).<\/span><\/p>\n<p><b>type: <\/b><span style=\"font-weight: 400;\">When making a search request to the YouTube API, you can use the type parameter to specify exactly what kind of content you want in the results. 
This helps you control whether you get videos, channels, or playlists in the response.<\/span><\/p>\n<p><b>relevanceLanguage:<\/b><span style=\"font-weight: 400;\"> &#8220;en&#8221; for English.<\/span><\/p>\n<p><b>location and locationRadius:<\/b><span style=\"font-weight: 400;\"> If you want to narrow your search results to a specific area, you can use the location and locationRadius settings.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Next, let&#8217;s write the keyword-based scraping code:<\/span><\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\">from googleapiclient.discovery import build\r\nimport csv\r\n\r\n# Define your API Key here\r\nAPI_KEY = 'YOUR_API_KEY'  # Replace with your actual API key\r\n# Build the YouTube API service\r\nyoutube = build('youtube', 'v3', developerKey=API_KEY)\r\n\r\n# Define the search function with location and language\r\ndef youtube_search(keyword, max_results=5, latitude=None, longitude=None, radius=\"50km\", language=\"en\"):\r\n    # Prepare the base search parameters\r\n    search_params = {\r\n        \"part\": \"snippet\",\r\n        \"q\": keyword,\r\n        \"maxResults\": max_results,\r\n        \"type\": \"video\",\r\n        \"relevanceLanguage\": language  # Specify the relevance language\r\n    }\r\n    # Add location parameters if latitude and longitude are provided\r\n    if latitude is not None and longitude is not None:\r\n        search_params[\"location\"] = f\"{latitude},{longitude}\"\r\n        search_params[\"locationRadius\"] = radius\r\n\r\n    # Call the search.list method to retrieve results matching the keyword, location, and language\r\n    request = youtube.search().list(**search_params)\r\n    response = request.execute()\r\n\r\n    # List to store video details for CSV\r\n    video_data = []\r\n\r\n    # Print important video details\r\n    for item in response.get('items', []):\r\n        video_id = item['id']['videoId']\r\n        snippet = item['snippet']\r\n\r\n        # Extract the important data points\r\n        details = {\r\n            \"Title\": snippet.get(\"title\", \"N\/A\"),\r\n            \"Channel Name\": snippet.get(\"channelTitle\", \"N\/A\"),\r\n            \"Video URL\": f\"https:\/\/www.youtube.com\/watch?v={video_id}\",\r\n            \"Description\": snippet.get(\"description\", \"N\/A\"),\r\n            \"Publish Date\": snippet.get(\"publishedAt\", \"N\/A\"),\r\n            \"Channel ID\": snippet.get(\"channelId\", \"N\/A\"),\r\n            \"Video ID\": video_id,\r\n            \"Thumbnail URL\": snippet.get(\"thumbnails\", {}).get(\"high\", {}).get(\"url\", \"N\/A\"),\r\n            \"Location Radius\": radius,\r\n            \"Relevance Language\": language,\r\n            \"Latitude\": latitude if latitude is not None else \"N\/A\",\r\n            \"Longitude\": longitude if longitude is not None else \"N\/A\",\r\n        }\r\n\r\n        # Append details to video_data for saving to CSV\r\n        video_data.append(details)\r\n\r\n        # Print the extracted details\r\n        print(\"\\nVideo Details:\")\r\n        for key, value in details.items():\r\n            print(f\"{key}: {value}\")\r\n\r\n    # Guard against an empty result set before writing the CSV\r\n    if not video_data:\r\n        print(\"No videos found for this search.\")\r\n        return\r\n\r\n    # Save video details to a CSV file\r\n    with open('youtube_videos.csv', 'w', newline='', encoding='utf-8') as csvfile:\r\n        fieldnames = video_data[0].keys()\r\n        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)\r\n        writer.writeheader()\r\n        writer.writerows(video_data)\r\n\r\n    print(\"Video details saved to youtube_videos.csv\")\r\n\r\n# Example usage: Search for videos by keyword, location, and language\r\n# Location: San Francisco (latitude: 37.7749, longitude: -122.4194), Language: English\r\nyoutube_search(\"Python tutorial\", max_results=50, latitude=37.7749, longitude=-122.4194, radius=\"50km\", language=\"en\")\r\n<\/pre>\n<p><b>Explanation 
of the Code:<\/b><\/p>\n<p><b>youtube_search:<\/b><span style=\"font-weight: 400;\"> This function searches for videos by keyword and accepts additional parameters such as location and language. It also pulls out important data points such as the video title, channel name, video URL, description, and publish date.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Here, <\/span><b>video_data<\/b><span style=\"font-weight: 400;\"> is initialized as an empty list; a dictionary with each video&#8217;s details is appended to it. After all results are collected, they are saved to a CSV file, youtube_videos.csv, using Python&#8217;s built-in csv module.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The CSV contains rows with columns such as Title, Channel Name, Video URL, etc., ready for analysis, further processing, or distribution. This is how it looks:<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-1582 size-full\" src=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/youtube_api_search_keyword.png\" alt=\"youtube_api_search_keyword\" width=\"1919\" height=\"911\" title=\"\" srcset=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/youtube_api_search_keyword.png 1919w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/youtube_api_search_keyword-300x142.png 300w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/youtube_api_search_keyword-1024x486.png 1024w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/youtube_api_search_keyword-768x365.png 768w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/youtube_api_search_keyword-1536x729.png 1536w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/youtube_api_search_keyword-624x296.png 624w\" sizes=\"auto, (max-width: 1919px) 100vw, 1919px\" \/><\/p>\n<h1 
id=\"Detailed Video Stats via YouTube API\"><b>Getting Detailed Video Stats via YouTube API<\/b><\/h1>\n<p><span style=\"font-weight: 400;\">In this part, we will collect more detailed information about each video using the YouTube Data API. This data is useful for deeper content analysis, such as gauging engagement or gathering per-video insights.<\/span><\/p>\n<p><b>Find the Video ID:<\/b><span style=\"font-weight: 400;\"> To fetch information about a video, we first need its video ID.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Each YouTube video has a unique identifier that appears at the end of its URL.<\/span><\/p>\n<p><b>Link: <\/b><span style=\"font-weight: 400;\">https:\/\/www.youtube.com\/watch?v=_uQrJ0TkZlc<\/span><\/p>\n<p><b>Video ID:<\/b><span style=\"font-weight: 400;\"> _uQrJ0TkZlc<\/span><\/p>\n<h1 id=\"Get the video\u2019s data via the API\"><b>Get the video&#8217;s data via the API<\/b><\/h1>\n<p><span style=\"font-weight: 400;\">We will use the videos endpoint of the YouTube API, which provides multiple pieces of information about each video, such as:<\/span><\/p>\n<ul>\n<li><b>Title and description<\/b><\/li>\n<li><b>Tags used by the creator<\/b><\/li>\n<li><b>Views, likes, and comments<\/b><\/li>\n<li><b>Date of publication, length, and quality of the video<\/b><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">The following code retrieves this data. 
The code includes a helper function that lets you input any video URL by extracting the ID from the full YouTube URL.<\/span><\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\">from googleapiclient.discovery import build\r\nimport re, csv\r\n\r\n# Define your API Key here\r\nAPI_KEY = 'YOUR_API_KEY'  # Replace with your actual API key\r\n\r\n# Build the YouTube API service\r\nyoutube = build('youtube', 'v3', developerKey=API_KEY)\r\n\r\n# Function to extract video ID from a YouTube URL\r\ndef extract_video_id(url):\r\n    # Regular expression to match a YouTube video ID\r\n    pattern = r\"(?:v=|\/)([0-9A-Za-z_-]{11}).*\"\r\n    match = re.search(pattern, url)\r\n    if match:\r\n        return match.group(1)\r\n    return None\r\n\r\n# Function to get video details\r\ndef get_video_details(url):\r\n    video_id = extract_video_id(url)\r\n    if not video_id:\r\n        print(\"Invalid video URL\")\r\n        return\r\n\r\n    # Call the videos.list method to retrieve video details\r\n    request = youtube.videos().list(\r\n        part=\"snippet,contentDetails,statistics\",\r\n        id=video_id\r\n    )\r\n    response = request.execute()\r\n\r\n    # Check if the video exists\r\n    if \"items\" not in response or not response[\"items\"]:\r\n        print(\"Video not found.\")\r\n        return\r\n\r\n    # Parse and display important video details\r\n    video = response[\"items\"][0]\r\n\r\n    details = {\r\n        \"Title\": video[\"snippet\"][\"title\"],\r\n        \"Channel Name\": video[\"snippet\"][\"channelTitle\"],\r\n        \"Published At\": video[\"snippet\"][\"publishedAt\"],\r\n        \"Description\": video[\"snippet\"][\"description\"],\r\n        \"Views\": video[\"statistics\"].get(\"viewCount\", \"N\/A\"),\r\n        \"Likes\": video[\"statistics\"].get(\"likeCount\", \"N\/A\"),\r\n        \"Comments\": video[\"statistics\"].get(\"commentCount\", \"N\/A\"),\r\n        \"Duration\": video[\"contentDetails\"][\"duration\"],\r\n        \"Tags\": ', '.join(video[\"snippet\"].get(\"tags\", [])),\r\n        \"Category ID\": video[\"snippet\"].get(\"categoryId\", \"N\/A\"),\r\n        \"Default Language\": video[\"snippet\"].get(\"defaultLanguage\", \"N\/A\"),\r\n        \"Dimension\": video[\"contentDetails\"][\"dimension\"],\r\n        \"Definition\": video[\"contentDetails\"][\"definition\"],\r\n        \"Captions Available\": video[\"contentDetails\"][\"caption\"],\r\n        \"Licensed Content\": video[\"contentDetails\"][\"licensedContent\"]\r\n    }\r\n\r\n    # Display the details\r\n    print(details)\r\n\r\n    # Save details to CSV\r\n    with open('video_details.csv', 'w', newline='', encoding='utf-8') as csvfile:\r\n        writer = csv.DictWriter(csvfile, fieldnames=details.keys())\r\n        writer.writeheader()\r\n        writer.writerow(details)\r\n\r\n    print(\"Video details saved to video_details.csv\")\r\n\r\n# Example usage\r\nget_video_details(\"https:\/\/www.youtube.com\/watch?v=_uQrJ0TkZlc\")\r\n<\/pre>\n<p><b>Explanation of the Code:<\/b><\/p>\n<p><b>Initialize API: <\/b><span style=\"font-weight: 400;\">The <code class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">build('youtube', 'v3', developerKey=API_KEY)<\/code> line initializes the YouTube API client with your API key.<\/span><\/p>\n<p><b>Extracting the Video ID: <\/b><span style=\"font-weight: 400;\">The <strong>extract_video_id()<\/strong> function scans a given YouTube URL and returns the video ID (a string unique to each video) using a regular expression. 
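<\/span><\/p>\n<p><span style=\"font-weight: 400;\">As a quick standalone sketch (separate from the scraper itself), the same pattern pulls the ID out of both a full watch URL and a short youtu.be link:<\/span><\/p>

```python
import re

# The same ID pattern used above: 11 URL-safe characters after "v=" or "/"
pattern = r"(?:v=|/)([0-9A-Za-z_-]{11}).*"

for url in [
    "https://www.youtube.com/watch?v=_uQrJ0TkZlc",
    "https://youtu.be/_uQrJ0TkZlc",
]:
    match = re.search(pattern, url)
    print(match.group(1) if match else "no ID found")  # prints _uQrJ0TkZlc twice
```

\n<p><span style=\"font-weight: 400;\">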
This ID is required to fetch the video&#8217;s info.<\/span><\/p>\n<p><b>Getting Video Information: <\/b><span style=\"font-weight: 400;\">The <strong>get_video_details()<\/strong> function calls the videos endpoint of the YouTube API with the video ID and retrieves the title, views, tags, and other information.<\/span><\/p>\n<p><b>Write CSV: <\/b><span style=\"font-weight: 400;\">This code stores the YouTube video details in a CSV file by opening a file named <\/span><b>video_details.csv<\/b><span style=\"font-weight: 400;\"> in write mode. It then sets up a csv.<\/span><b>DictWriter<\/b><span style=\"font-weight: 400;\">, passing the keys of details as the column names. The <\/span><b>writeheader()<\/b><span style=\"font-weight: 400;\"> method writes these headers to the file, after which the writer writes the actual video details.<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-1583 size-full\" src=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/youtube_api_search_vedio_details.png\" alt=\"\" width=\"1919\" height=\"1010\" title=\"\" srcset=\"https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/youtube_api_search_vedio_details.png 1919w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/youtube_api_search_vedio_details-300x158.png 300w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/youtube_api_search_vedio_details-1024x539.png 1024w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/youtube_api_search_vedio_details-768x404.png 768w, https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/youtube_api_search_vedio_details-1536x808.png 1536w, 
https:\/\/rayobyte.com\/community\/wp-content\/uploads\/2024\/11\/youtube_api_search_vedio_details-624x328.png 624w\" sizes=\"auto, (max-width: 1919px) 100vw, 1919px\" \/><\/p>\n<h1 id=\"API Rate Limitations\"><b>API Rate Limitations<\/b><\/h1>\n<p><span style=\"font-weight: 400;\">If you use the official YouTube API, keep its rate limits in mind. YouTube applies a daily request quota, which caps how many calls your project can make per day. Here\u2019s how it works:<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Every project gets 10,000 quota units per day.<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Different types of API requests consume different amounts of quota. For example:<\/span><\/li>\n<\/ol>\n<ol>\n<li style=\"list-style-type: none;\">\n<ul>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Every search request costs about 100 units.<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Video detail requests (getting a video&#8217;s title, description, etc.) normally cost 1 unit.<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Retrieving comments is charged at 2 or more units.<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">Going over the daily limit may lead to your requests being blocked until the quota resets. To stay within it, optimize your requests so that you retrieve only the information you need and, where possible, batch queries.<\/span><\/p>\n<h1 id=\"Youtube Scraping without API\"><b>YouTube Scraping without the API: Selenium Stealth<\/b><\/h1>\n<p><span id=\"Selenium Stealth\" style=\"font-weight: 400;\">When the YouTube API does not provide all the data we are looking for, we can use web scraping. 
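<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The examples in this section rely on the selenium and selenium-stealth packages (plus a local Chrome install); if you have not installed them yet, you can add them with:<\/span><\/p>

```shell
pip install selenium selenium-stealth
```

\n<p><span style=\"font-weight: 400;\">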
Here we will use Selenium with Stealth so that it passes YouTube&#8217;s bot detection. In short, this involves loading the YouTube page, reading video information directly from the JSON-LD data embedded in the page&#8217;s head, and, if necessary, scraping related videos by reusing a logged-in browser profile.<\/span><\/p>\n<h1 id=\"JSON-LD Data\"><b>JSON-LD Data: Extracting Video Details<\/b><\/h1>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\">import json\r\nimport time\r\nimport re\r\nfrom selenium import webdriver\r\nfrom selenium_stealth import stealth\r\nfrom selenium.webdriver.common.by import By\r\nfrom selenium.webdriver.support.ui import WebDriverWait\r\nfrom selenium.webdriver.support import expected_conditions as EC\r\n\r\n# Initialize Selenium WebDriver\r\noptions = webdriver.ChromeOptions()\r\noptions.add_argument(\"start-maximized\")\r\n\r\n# options.add_argument(\"--headless\")\r\n\r\noptions.add_experimental_option(\"excludeSwitches\", [\"enable-automation\"])\r\noptions.add_experimental_option('useAutomationExtension', False)\r\n\r\ndriver = webdriver.Chrome(options=options)\r\n\r\n# Function to extract video details using JSON-LD data with regex\r\ndef get_video_details(url):\r\n    # Apply Selenium Stealth to avoid detection\r\n    stealth(\r\n        driver,\r\n        languages=[\"en-US\", \"en\"],\r\n        vendor=\"Google Inc.\",\r\n        platform=\"Win32\",\r\n        webgl_vendor=\"Intel Inc.\",\r\n        renderer=\"Intel Iris OpenGL Engine\",\r\n        fix_hairline=True,\r\n    )\r\n\r\n    driver.get(url)\r\n    time.sleep(5)\r\n\r\n    # Extract the page source\r\n    page_source = driver.page_source\r\n\r\n    # Use regex to find the JSON-LD data for VideoObject\r\n    match = re.search(r'({[^}]+\"@type\":\"VideoObject\"[^}]+})', page_source)\r\n    if not match:\r\n        print(\"No JSON-LD data found.\")\r\n        return\r\n\r\n    # Parse JSON-LD data\r\n    json_data = json.loads(match.group(1))\r\n\r\n    # Extract the most important video details\r\n    details = {\r\n        \"Title\": json_data.get(\"name\", \"N\/A\"),\r\n        \"Description\": json_data.get(\"description\", \"N\/A\"),\r\n        \"Duration\": json_data.get(\"duration\", \"N\/A\"),\r\n        \"Embed URL\": json_data.get(\"embedUrl\", \"N\/A\"),\r\n        \"Views\": json_data.get(\"interactionCount\", \"N\/A\"),\r\n        \"Thumbnail URL\": json_data.get(\"thumbnailUrl\", [\"N\/A\"])[0],\r\n        \"Upload Date\": json_data.get(\"uploadDate\", \"N\/A\"),\r\n        \"Genre\": json_data.get(\"genre\", \"N\/A\"),\r\n        \"Channel Name\": json_data.get(\"author\", \"N\/A\"),\r\n        \"Context\": json_data.get(\"@context\", \"N\/A\"),\r\n        \"Type\": json_data.get(\"@type\", \"N\/A\"),\r\n    }\r\n\r\n    # Print the extracted details\r\n    for key, value in details.items():\r\n        print(f\"{key}: {value}\")\r\n\r\n# Example usage\r\nget_video_details(\"https:\/\/www.youtube.com\/watch?v=_uQrJ0TkZlc\")\r\n<\/pre>\n<p><b>Explanation of the Code:<\/b><\/p>\n<p><b>Stealth:<\/b><span style=\"font-weight: 400;\"> We use the selenium-stealth library to bypass bot detection.<\/span><\/p>\n<p><b>Loading the Page: <\/b><span style=\"font-weight: 400;\">The <code class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">driver.get(url)<\/code> command opens the page, followed by a 5-second wait so that the page is fully loaded.<\/span><\/p>\n<p><b>Extracting JSON-LD Data:<\/b><\/p>\n<p><span style=\"font-weight: 400;\">We use <\/span><b>page_source<\/b><span style=\"font-weight: 400;\"> to get the complete HTML of the page.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We define a regular expression to match JSON-LD metadata of type <code class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">\"@type\":\"VideoObject\"<\/code>. If no match is found, an error 
message is shown. Otherwise, the data is parsed into a JSON object for further use.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Finally, we retrieve and display key information about the video (title, description, duration, views, etc.).<\/span><\/p>\n<h1 id=\"Related Videos\"><b>Related Videos (Using a Default Browser Profile)<\/b><\/h1>\n<p><span style=\"font-weight: 400;\">We can efficiently scrape related videos by using a pre-logged-in Chrome profile to skip automated login. Here\u2019s the approach:<\/span><\/p>\n<p><b>Utilizing a Browser Profile:<\/b><span style=\"font-weight: 400;\"> We point the Chrome user-data-dir and profile-directory flags at a profile that is already logged in to YouTube, skipping any additional login steps.<\/span><\/p>\n<p><b>Passing the Profile to Selenium<\/b><span style=\"font-weight: 400;\">: When we load this profile in Selenium, we navigate to the related-videos section of YouTube by simulating clicks on the &#8220;next&#8221; and &#8220;related&#8221; buttons.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This reduces the chance of being detected as a bot, but the selectors will need updating whenever YouTube changes its page layout.<\/span><\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\"># Specify the Chrome user data directory up to \"User Data\" only\r\noptions.add_argument(r\"user-data-dir=C:\\Users\\farhan\\AppData\\Local\\Google\\Chrome\\User Data\")\r\n\r\n# Specify the profile directory (e.g., \"Profile 17\")\r\noptions.add_argument(\"profile-directory=Profile 17\")<\/pre>\n<p>Here is the full code:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\">import json\r\nimport time\r\nimport re\r\nfrom selenium import webdriver\r\nfrom selenium_stealth import stealth\r\nfrom selenium.webdriver.common.by import By\r\nfrom selenium.webdriver.support.ui import WebDriverWait\r\nfrom selenium.webdriver.support import expected_conditions as EC\r\nimport csv\r\n\r\n# Initialize 
Selenium WebDriver\r\noptions = webdriver.ChromeOptions()\r\noptions.add_argument(\"start-maximized\")\r\n\r\n# options.add_argument(\"--headless\")\r\n\r\noptions.add_experimental_option(\"excludeSwitches\", [\"enable-automation\"])\r\noptions.add_experimental_option('useAutomationExtension', False)\r\n\r\n# Specify the Chrome user data directory up to \"User Data\" only\r\noptions.add_argument(r\"user-data-dir=C:\\Users\\farhan\\AppData\\Local\\Google\\Chrome\\User Data\")\r\n\r\n# Specify the profile directory (e.g., \"Profile 17\")\r\noptions.add_argument(\"profile-directory=Profile 17\")\r\n\r\ndriver = webdriver.Chrome(options=options)\r\n\r\n# Function to extract video details using JSON-LD data with regex\r\ndef get_video_details(url):\r\n    # Apply Selenium Stealth to avoid detection\r\n    stealth(\r\n        driver,\r\n        languages=[\"en-US\", \"en\"],\r\n        vendor=\"Google Inc.\",\r\n        platform=\"Win32\",\r\n        webgl_vendor=\"Intel Inc.\",\r\n        renderer=\"Intel Iris OpenGL Engine\",\r\n        fix_hairline=True,\r\n    )\r\n\r\n    driver.get(url)\r\n    time.sleep(5)\r\n\r\n    # Extract the page source\r\n    page_source = driver.page_source\r\n\r\n    # Use regex to find the JSON-LD data for VideoObject\r\n    match = re.search(r'({[^}]+\"@type\":\"VideoObject\"[^}]+})', page_source)\r\n    if not match:\r\n        print(\"No JSON-LD data found.\")\r\n        return\r\n\r\n    # Parse JSON-LD data\r\n    json_data = json.loads(match.group(1))\r\n\r\n    # Extract the most important video details\r\n    details = {\r\n        \"Title\": json_data.get(\"name\", \"N\/A\"),\r\n        \"Description\": json_data.get(\"description\", \"N\/A\"),\r\n        \"Duration\": json_data.get(\"duration\", \"N\/A\"),\r\n        \"Embed URL\": json_data.get(\"embedUrl\", \"N\/A\"),\r\n        \"Views\": json_data.get(\"interactionCount\", \"N\/A\"),\r\n        \"Thumbnail URL\": json_data.get(\"thumbnailUrl\", [\"N\/A\"])[0],\r\n        \"Upload Date\": json_data.get(\"uploadDate\", \"N\/A\"),\r\n        \"Genre\": json_data.get(\"genre\", \"N\/A\"),\r\n        \"Channel Name\": json_data.get(\"author\", \"N\/A\"),\r\n        \"Context\": json_data.get(\"@context\", \"N\/A\"),\r\n        \"Type\": json_data.get(\"@type\", \"N\/A\"),\r\n        \"Related URLs\": []  # Initialize as an empty list\r\n    }\r\n\r\n    # Print the extracted details\r\n    for key, value in details.items():\r\n        print(f\"{key}: {value}\")\r\n\r\n    try:\r\n        while True:\r\n            time.sleep(3)\r\n            # Click the \"Next\" arrow until the \"Related\" chip becomes visible\r\n            next_arrow = WebDriverWait(driver, 2).until(\r\n                        EC.element_to_be_clickable((By.XPATH, \"\/\/div[@id='right-arrow-button']\/\/button\"))\r\n                    )\r\n            next_arrow.click()\r\n            time.sleep(3)  # Short delay to allow elements to load\r\n\r\n            # Try to locate the \"Related\" button\r\n            related_button = driver.find_element(By.XPATH, \"\/\/yt-chip-cloud-chip-renderer[.\/\/yt-formatted-string[@title='Related']]\")\r\n            if related_button.is_displayed():\r\n                    related_button.click()\r\n                    print(\"Clicked on the 'Related' button.\")\r\n                    time.sleep(3)\r\n                    # Refresh the page source so it includes the related videos\r\n                    page_source = driver.page_source\r\n                    related_video_pattern = r'yt-simple-[^&gt;]+video-renderer[^&gt;]+href=\"([^\"]+)'\r\n                    urls = re.findall(related_video_pattern, page_source)\r\n                    # Add the related URLs to the list in `details`\r\n                    details[\"Related URLs\"].extend([f\"https:\/\/www.youtube.com{url}\" for url in urls])\r\n                    for url in urls:\r\n                        print(f\"https:\/\/www.youtube.com{url}\")\r\n                    break\r\n\r\n    except Exception as e:\r\n        print(\"Could not 
find or click the 'Related' button:\", e)\r\n\r\n    # Join related URLs as a single string separated by commas\r\n    details[\"Related URLs\"] = \", \".join(details[\"Related URLs\"])\r\n\r\n    # Save details to CSV\r\n    with open('video_details.csv', 'w', newline='', encoding='utf-8') as csvfile:\r\n        writer = csv.DictWriter(csvfile, fieldnames=details.keys())\r\n        writer.writeheader()\r\n        writer.writerow(details)\r\n\r\n# Example usage\r\nget_video_details(\"https:\/\/www.youtube.com\/watch?v=_uQrJ0TkZlc\")\r\n<\/pre>\n<h1 id=\"Bypassing Bot Detection with Proxies\"><b>Bypassing Bot Detection with Proxies<\/b><\/h1>\n<p><span style=\"font-weight: 400;\">When you&#8217;re scraping websites like YouTube, one of the biggest challenges is avoiding detection and getting blocked. That&#8217;s where proxies come in handy.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Proxies mask your real IP address and make it look as though you&#8217;re browsing from a completely different location, which makes it much harder for websites to tell that a bot is behind the screen.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In most of my tutorials, I use Rayobyte proxies because they\u2019re pretty reliable, but honestly, you can choose any proxy service that works for you. The important part is to keep things looking natural: spread your requests out, and make sure you\u2019re not sending too many too quickly. 
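One simple way to spread requests out is to sleep for a random interval between page loads rather than a fixed one. Below is a minimal sketch of that idea; the `polite_sleep` helper and the delay bounds are my own illustration, not part of the scraper code in this tutorial.

```python
import random
import time

def polite_sleep(min_s=3.0, max_s=8.0):
    """Sleep for a random interval so request timing looks less robotic."""
    delay = random.uniform(min_s, max_s)
    time.sleep(delay)
    return delay

# Pace a batch of watch pages instead of requesting them back to back
video_urls = [
    "https://www.youtube.com/watch?v=_uQrJ0TkZlc",
]
for url in video_urls:
    # driver.get(url) would go here in the actual scraper
    polite_sleep(0.5, 1.5)
```

Randomized gaps are harder to fingerprint than a constant `time.sleep(5)`, and you can widen the bounds for longer crawls.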
Here, I demonstrate how you can easily integrate a proxy.<br \/>\n<\/span><\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\">import json\r\nimport time\r\nimport re\r\nfrom selenium import webdriver\r\nfrom selenium_stealth import stealth\r\nfrom selenium.webdriver.common.by import By\r\nfrom selenium.webdriver.support.ui import WebDriverWait\r\nfrom selenium.webdriver.support import expected_conditions as EC\r\nimport csv \r\n\r\n\r\n# Function to create proxy authentication extension\r\ndef create_proxy_auth_extension(proxy_host, proxy_user, proxy_pass):\r\n    import zipfile\r\n    import os\r\n\r\n    # Separate the host and port\r\n    host = proxy_host.split(':')[0]  # Extract the host part (e.g., \"la.residential.rayobyte.com\")\r\n    port = proxy_host.split(':')[1]  # Extract the port part (e.g., \"8000\")\r\n\r\n    # Define proxy extension files\r\n    manifest_json = \"\"\"\r\n    {\r\n        \"version\": \"1.0.0\",\r\n        \"manifest_version\": 2,\r\n        \"name\": \"Chrome Proxy\",\r\n        \"permissions\": [\r\n            \"proxy\",\r\n            \"tabs\",\r\n            \"unlimitedStorage\",\r\n            \"storage\",\r\n            \"&lt;all_urls&gt;\",\r\n            \"webRequest\",\r\n            \"webRequestBlocking\"\r\n        ],\r\n        \"background\": {\r\n            \"scripts\": [\"background.js\"]\r\n        },\r\n        \"minimum_chrome_version\":\"22.0.0\"\r\n    }\r\n    \"\"\"\r\n    \r\n    background_js = f\"\"\"\r\n    var config = {{\r\n            mode: \"fixed_servers\",\r\n            rules: {{\r\n              singleProxy: {{\r\n                scheme: \"http\",\r\n                host: \"{host}\",\r\n                port: parseInt({port})\r\n              }},\r\n              bypassList: [\"localhost\"]\r\n            }}\r\n          }};\r\n    chrome.proxy.settings.set({{value: config, scope: \"regular\"}}, function() {{}});\r\n    chrome.webRequest.onAuthRequired.addListener(\r\n        
function(details) {{\r\n            return {{\r\n                authCredentials: {{\r\n                    username: \"{proxy_user}\",\r\n                    password: \"{proxy_pass}\"\r\n                }}\r\n            }};\r\n        }},\r\n        {{urls: [\"&lt;all_urls&gt;\"]}},\r\n        [\"blocking\"]\r\n    );\r\n    \"\"\"\r\n\r\n    # Create the extension\r\n    pluginfile = 'proxy_auth_plugin.zip'\r\n    with zipfile.ZipFile(pluginfile, 'w') as zp:\r\n        zp.writestr(\"manifest.json\", manifest_json)\r\n        zp.writestr(\"background.js\", background_js)\r\n\r\n    return pluginfile\r\n\r\n# Proxy configuration\r\nproxy_server = \"server_name:port\"  # Replace with your proxy server and port\r\nproxy_username = \"username\"  # Replace with your proxy username\r\nproxy_password = \"password\"  # Replace with your proxy password\r\n\r\n\r\n# Initialize Selenium WebDriver\r\noptions = webdriver.ChromeOptions()\r\noptions.add_argument(\"start-maximized\")\r\noptions.add_argument(f'--proxy-server={proxy_server}')\r\n# options.add_argument(\"--headless\")\r\n\r\noptions.add_experimental_option(\"excludeSwitches\", [\"enable-automation\"])\r\noptions.add_experimental_option('useAutomationExtension', False)\r\n\r\n# Add proxy authentication if necessary (for proxies that require username\/password)\r\nif proxy_username and proxy_password:\r\n    # Chrome does not support proxy authentication directly; use an extension for proxy authentication\r\n    options.add_extension(create_proxy_auth_extension(proxy_server, proxy_username, proxy_password))\r\n\r\ndriver = webdriver.Chrome(options=options)\r\n\r\n# Function to extract video details using JSON-LD data with regex\r\ndef get_video_details(url):\r\n    # Apply Selenium Stealth to avoid detection\r\n    stealth(\r\n        driver,\r\n        languages=[\"en-US\", \"en\"],\r\n        vendor=\"Google Inc.\",\r\n        platform=\"Win32\",\r\n        webgl_vendor=\"Intel Inc.\",\r\n        
renderer=\"Intel Iris OpenGL Engine\",\r\n        fix_hairline=True,\r\n    )\r\n\r\n    driver.get(url)\r\n    time.sleep(5) \r\n\r\n    # Extract the page source\r\n    page_source = driver.page_source\r\n\r\n    # Use regex to find the JSON-LD data for VideoObject\r\n    match = re.search(r'({[^}]+\"@type\":\"VideoObject\"[^}]+})', page_source)\r\n    if not match:\r\n        print(\"No JSON-LD data found.\")\r\n        return\r\n\r\n    # Parse JSON-LD data\r\n    json_data = json.loads(match.group(1))\r\n\r\n    # Extract the top 20 most important video details\r\n    details = {\r\n        \"Title\": json_data.get(\"name\", \"N\/A\"),\r\n        \"Description\": json_data.get(\"description\", \"N\/A\"),\r\n        \"Duration\": json_data.get(\"duration\", \"N\/A\"),\r\n        \"Embed URL\": json_data.get(\"embedUrl\", \"N\/A\"),\r\n        \"Views\": json_data.get(\"interactionCount\", \"N\/A\"),\r\n        \"Thumbnail URL\": json_data.get(\"thumbnailUrl\", [\"N\/A\"])[0],\r\n        \"Upload Date\": json_data.get(\"uploadDate\", \"N\/A\"),\r\n        \"Genre\": json_data.get(\"genre\", \"N\/A\"),\r\n        \"Channel Name\": json_data.get(\"author\", \"N\/A\"),\r\n        \"Context\": json_data.get(\"@context\", \"N\/A\"),\r\n        \"Type\": json_data.get(\"@type\", \"N\/A\"),\r\n         \r\n    }\r\n\r\n    # Print the extracted details\r\n    for key, value in details.items():\r\n        print(f\"{key}: {value}\")\r\n\r\n \r\n\r\n    # Save details to CSV\r\n    with open('video_details.csv', 'w', newline='', encoding='utf-8') as csvfile:\r\n        writer = csv.DictWriter(csvfile, fieldnames=details.keys())\r\n        writer.writeheader()\r\n        writer.writerow(details)\r\n\r\n# Example usage\r\nget_video_details(\"https:\/\/www.youtube.com\/watch?v=_uQrJ0TkZlc\")\r\n \r\n\r\n\r\n<\/pre>\n<p><span style=\"font-weight: 400;\"> <a href=\"https:\/\/github.com\/MDFARHYN\/-YouTube-Scraper-Python\" rel=\"nofollow noopener\" target=\"_blank\">Download 
the full source code from GitHub<\/a>\u00a0<\/span><\/p>\n<p><a href=\"https:\/\/www.youtube.com\/watch?v=tXiD9XnCBXg\" rel=\"nofollow noopener\" target=\"_blank\">Watch the full tutorial on YouTube\u00a0<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Download the full source code from GitHub Table of content Introduction Getting started with the YouTube API Setup Google Cloud Project YouTube Scraper Based On&hellip;<\/p>\n","protected":false},"author":23,"featured_media":1585,"comment_status":"open","ping_status":"closed","template":"","meta":{"rank_math_lock_modified_date":false},"categories":[],"class_list":["post-1581","scraping_project","type-scraping_project","status-publish","has-post-thumbnail","hentry"],"_links":{"self":[{"href":"https:\/\/rayobyte.com\/community\/wp-json\/wp\/v2\/scraping_project\/1581","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rayobyte.com\/community\/wp-json\/wp\/v2\/scraping_project"}],"about":[{"href":"https:\/\/rayobyte.com\/community\/wp-json\/wp\/v2\/types\/scraping_project"}],"author":[{"embeddable":true,"href":"https:\/\/rayobyte.com\/community\/wp-json\/wp\/v2\/users\/23"}],"replies":[{"embeddable":true,"href":"https:\/\/rayobyte.com\/community\/wp-json\/wp\/v2\/comments?post=1581"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/rayobyte.com\/community\/wp-json\/wp\/v2\/media\/1585"}],"wp:attachment":[{"href":"https:\/\/rayobyte.com\/community\/wp-json\/wp\/v2\/media?parent=1581"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rayobyte.com\/community\/wp-json\/wp\/v2\/categories?post=1581"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}