Actowiz Solutions LLP ‘The Fastest Growing’ Big Data Analytics Company


This post will share how to scrape Google Things To Do using Node.js. You will learn how to prepare before scraping Google Things To Do, the primary process, a detailed explanation of the code, and the final output.


Here is the entire code:
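What follows is a minimal sketch of the complete scraper, reassembled from the steps explained below. The CSS class names in it (.f4hh3d, .skFvHc, .L5sqIe, and so on) and the example search query are placeholders rather than guaranteed values: Google changes its markup often, so pick up the current selectors with SelectorGadget as described later in this post.

const puppeteer = require("puppeteer-extra");
const StealthPlugin = require("puppeteer-extra-plugin-stealth");

puppeteer.use(StealthPlugin());

// Example query; replace with your own search.
const searchQuery = "things to do in new york";
const URL = "https://www.google.com";

async function getPlacesInfo(page) {
  // Scroll the page so that every thumbnail is loaded.
  const scrollHeight = await page.evaluate(() => document.body.scrollHeight);
  const iterationsCount = 10; // scroll iterations; tune for your connection speed
  for (let i = 0; i < iterationsCount; i++) {
    await page.mouse.wheel({ deltaY: scrollHeight });
    await page.waitForTimeout(2000);
  }

  // Collect and return the data for every place on the page.
  // All class names below are placeholders; check them with SelectorGadget.
  return page.evaluate(() =>
    Array.from(document.querySelectorAll(".f4hh3d")).map((el) => ({
      title: el.querySelector(".skFvHc")?.textContent.trim(),
      description: el.querySelector(".nFoFM")?.textContent.trim(),
      rating: el.querySelector(".KFi5wf")?.textContent.trim(),
      thumbnail: el.querySelector("img")?.getAttribute("src"),
    }))
  );
}

async function getThingsToDo() {
  const browser = await puppeteer.launch({
    headless: true, // run without a visible browser window
    args: ["--no-sandbox", "--disable-setuid-sandbox"],
  });
  const page = await browser.newPage();

  // Give slow connections 60 seconds instead of the default 30.
  await page.setDefaultNavigationTimeout(60000);
  await page.goto(URL);

  // Type the query and open the "See all top sights" view.
  await page.waitForSelector("input[type=text]");
  await page.click("input[type=text]");
  await page.keyboard.type(searchQuery);
  await page.keyboard.press("Enter");
  await page.waitForSelector(".L5sqIe"); // placeholder: "See all top sights" button
  await page.click(".L5sqIe");
  await page.waitForTimeout(3000);

  const places = {};
  places.topSights = await getPlacesInfo(page);

  // Visit every category tab and store its places under the category name.
  const categories = await page.$$(".kxfCvc"); // placeholder: category tabs
  for (const category of categories) {
    const categoryName = await category.evaluate((el) => el.textContent.trim());
    await category.click();
    await page.waitForTimeout(3000); // let the new category render
    places[categoryName] = await getPlacesInfo(page);
  }

  await browser.close();
  return places;
}

getThingsToDo().then((result) => console.log(JSON.stringify(result, null, 2)));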



To start, we must create a Node.js project and add the puppeteer, puppeteer-extra, and puppeteer-extra-plugin-stealth packages, which let us control Chromium, Chrome, or Firefox over the DevTools Protocol in headless or full mode. Here, we will work with Chromium as the default browser.

For this, open a terminal in our project directory, type the command, and press Enter:

$ npm init -y

And then:
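$ npm i puppeteer puppeteer-extra puppeteer-extra-plugin-stealth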


If you haven't installed Node.js on your device, you can go to the official website, download it, and follow the documentation to install it.

Note: you can also use Puppeteer without any plugins. However, it helps to use it with puppeteer-extra and puppeteer-extra-plugin-stealth to avoid website detection when you use headless Chromium or a webdriver. To check this, you can visit the Chrome headless test website and observe the difference.


With that, the setup of the Node.js environment for our project is finished, so let's walk through the code step by step.


We will scrape the data using the HTML elements of the Google Things to Do page. You can get the correct CSS selectors quickly with the SelectorGadget Chrome extension, which builds a CSS selector when you click the required element in the browser. However, it only works effectively some of the time, struggling mainly on websites that rely heavily on JavaScript.

The GIF below shows how to select different parts of the output with the help of SelectorGadget.


Code Explanation

First, we require the puppeteer-extra library and its Stealth plugin so that Puppeteer can control Chromium while avoiding website detection.
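In code:

const puppeteer = require("puppeteer-extra");
const StealthPlugin = require("puppeteer-extra-plugin-stealth");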


Then, we ask Puppeteer to use the Stealth plugin and define the search query and the URL.
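The query string here is only an example; change it to whatever you want to search:

puppeteer.use(StealthPlugin());

const searchQuery = "things to do in new york";
const URL = "https://www.google.com";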


Then we write a function that finds the places on the webpage.
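Its skeleton, with a function name of our own choosing, looks like this:

async function getPlacesInfo(page) {
  // ...scroll the page, then extract each place (filled in below)
}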


Using this function, we'll walk through the steps below to collect the preliminary information.


To begin, we should scroll the webpage to load every thumbnail. For this step, we get the page's scrollHeight, set the scroll iteration count, and then scroll the page in a for loop.
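A sketch of the scrolling step:

const scrollHeight = await page.evaluate(() => document.body.scrollHeight);
const iterationsCount = 10; // scroll iterations; tune for your connection speed
for (let i = 0; i < iterationsCount; i++) {
  await page.mouse.wheel({ deltaY: scrollHeight });
  await page.waitForTimeout(2000);
}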


After that, we collect and return the data for every place on the webpage with the help of the evaluate() method.
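The class names and field list below are placeholders; swap in whatever SelectorGadget shows for the live page:

return page.evaluate(() =>
  Array.from(document.querySelectorAll(".f4hh3d")).map((el) => ({
    title: el.querySelector(".skFvHc")?.textContent.trim(),
    description: el.querySelector(".nFoFM")?.textContent.trim(),
    rating: el.querySelector(".KFi5wf")?.textContent.trim(),
    thumbnail: el.querySelector("img")?.getAttribute("src"),
  }))
);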


Then, we write the function that controls the web browser and collects data from every category.
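Again a skeleton, to be filled in over the next steps:

async function getThingsToDo() {
  // ...launch the browser, run the search, and walk the categories
}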


Here, we launch a browser with the puppeteer.launch() method, using the options headless: true and args: ["--no-sandbox", "--disable-setuid-sandbox"].

These options mean that we run the browser in headless mode and pass the array of arguments that lets the browser launch from an online IDE. Then we open a new page.
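Putting that together:

const browser = await puppeteer.launch({
  headless: true, // run without a visible browser window
  args: ["--no-sandbox", "--disable-setuid-sandbox"],
});
const page = await browser.newPage();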


Then, we change the default timeout from 30 seconds to 60 seconds, so selectors have time to appear on a slow internet connection, using the .setDefaultNavigationTimeout() method, and go to the URL with the help of the .goto() method.
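These two calls look like this:

await page.setDefaultNavigationTimeout(60000);
await page.goto(URL);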


After that, we wait for the input[type=text] selector to load with the waitForSelector() method, click the input, type the search query with the keyboard.type() method, and press Enter with the keyboard.press() method. Then we click the "See all top sights" button.
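A sketch of this sequence (the .L5sqIe selector for the button is a placeholder):

await page.waitForSelector("input[type=text]");
await page.click("input[type=text]");
await page.keyboard.type(searchQuery);
await page.keyboard.press("Enter");
await page.waitForSelector(".L5sqIe"); // placeholder: "See all top sights" button
await page.click(".L5sqIe");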


After this, we define the places object and add the place information from the page to each of its keys.
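Starting with the top sights:

const places = {};
places.topSights = await getPlacesInfo(page);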


Then, we collect each category from the webpage and, after clicking on each one, get all of its place information and set it on the places object under the category's name, as sketched below.
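A sketch, again with a placeholder selector for the category tabs:

const categories = await page.$$(".kxfCvc"); // placeholder: category tab selector
for (const category of categories) {
  const categoryName = await category.evaluate((el) => el.textContent.trim());
  await category.click();
  await page.waitForTimeout(3000); // let the new category render
  places[categoryName] = await getPlacesInfo(page);
}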


Finally, after receiving all the data, we will close the browser.
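The closing lines of the function:

await browser.close();
return places;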


Then we can launch our tool to parse the data.
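A minimal runner:

getThingsToDo().then((result) => console.log(JSON.stringify(result, null, 2)));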



Output

Running the scraper prints a JSON object whose keys are the category names (beginning with the top sights), each containing an array of place entries.


Did you find it helpful? If you wish to learn more or want us to help you with web scraping services, contact Actowiz Solutions.