In this blog, we will use Python, Twilio, and Heroku to extract data from a grocery website's API and send ourselves a text notification when delivery slots become available.
We live in extraordinary times.
And with extraordinary times come extraordinary challenges. One such challenge was keeping grocery supply chains running with millions of people under lockdown due to Covid-19. For vulnerable people who are isolating or unable to visit a supermarket in person, the only accessible option is booking a supermarket delivery slot online. However, with massive demand for these services, it has become notoriously difficult to get an available slot, leaving many people repeatedly logging in to check.
That got us thinking about the ever-growing number of problems like this, and how we could use Python to automate the process for ourselves.
The first step towards our goal of an automated delivery slot checker is working out how to programmatically scrape the data we need from a grocery website.
After choosing ASDA as our grocery site, creating an account, and entering a delivery postcode, we arrive at the delivery slot page shown below.
Here we can see a neatly formatted table of dates, times, and the availability of each slot. Naturally, all the slots are currently showing 'Sold Out', but we now have a clear view of the data we want to extract.
If you've done any data scraping or web development before, you'll be familiar with the DevTools functionality built into most major browsers. For those who aren't: DevTools is a set of tools that lets users inspect a webpage and study its HTML, CSS, and JavaScript and, critically for this project, the metadata associated with the network requests made to and from the server. The following step is perhaps the most important one.
With the DevTools window open, we can begin to see what happens behind the scenes to let the page display an up-to-date table of slot availability. Navigating to the 'Network' tab of the DevTools window gives us access to all the network requests the website makes to fetch the latest data it displays. Refreshing the page produces a list of requests, one of which must hold the key to where the slot availability data comes from.
This list can look a little overwhelming, since it contains a sea of different requests covering everything from the CSS that styles the page to the JavaScript that drives its functionality. We are interested in the data presented on the page, so filtering the requests to those of type 'XHR' (XMLHttpRequest) lets us focus on requests that fetch data from the server and ignore those concerned with page styling. That still leaves a few requests to inspect; luckily, a gamble that the relevant requests will contain the word 'slot' narrows the search down to four remaining requests.
Clicking on a request and selecting the 'Response' tab reveals the JSON response it produced and, therefore, the data supplied to the webpage. From this we can quickly see that the request containing the data we are after is a POST request to the URL https://groceries.asda.com/api/v3/slot/view. Looking at the 'Params' tab, we can also see the JSON data the browser submitted with the POST request; right-clicking and selecting 'Copy All' copies that JSON to the clipboard, giving us everything we need to tell Python how to fetch the data.
Python's Requests library makes it very easy to issue HTTP requests programmatically. From our inspection of the website, we know the URL to send the request to, the type of request to use (POST), and the JSON data to send (currently sitting in the clipboard).
In practice, that gives us the code sketched below.
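A minimal sketch of the request follows; the payload field names and environment variable names here are illustrative assumptions, so substitute the JSON you copied from DevTools and your own values:

```python
import os
from datetime import datetime, timedelta

import requests

# Two-week window, formatted with strftime. The format string is an
# assumption -- match it to the dates seen in the copied JSON payload.
start_date = datetime.now().strftime("%Y-%m-%dT%H:%M:%S+00:00")
end_date = (datetime.now() + timedelta(days=14)).strftime("%Y-%m-%dT%H:%M:%S+00:00")

# Skeleton of the JSON payload copied from DevTools. The real payload
# contains more fields (postcode, account details, etc.); keep whatever
# you copied and substitute your own values. Sensitive values are read
# from environment variables rather than hard-coded.
data = {
    "data": {
        "service_info": {"fulfillment_type": "DELIVERY"},
        "start_date": start_date,
        "end_date": end_date,
        "customer_info": {"account_id": os.environ["ASDA_ACCOUNT_ID"]},
        "order_info": {"order_id": os.environ["ASDA_ORDER_ID"]},
    }
}

# POST the payload; `json=` serializes the dict and sets the header.
r = requests.post("https://groceries.asda.com/api/v3/slot/view", json=data)
```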
We have pasted the JSON data from the clipboard and added a simple request that posts the data to the URL via the json argument of requests.post(). The response object returned by the request is stored in the variable r for later use.
We have also replaced some of the parameters in the data with variables. The start_date and end_date variables pass a dynamic date range to the API, since we are always interested in the two weeks ahead of the current date. The strftime method of datetime objects lets us specify the exact string format the API expects, which we can match against the format seen in the JSON data we copied earlier.
The parameters stored as os.environ variables are sensitive information that we don't want publicly visible on GitHub. Later, we will see how to store this data safely while keeping it available to the script.
We now have a fully working Python script that sends a request to ASDA's API and stores the response object it receives. Let's inspect that response object and work out how to parse it for the data we are looking for.
Our response object r contains all of the data and metadata returned from the POST request to ASDA's API. First, we need to check whether our request to the server succeeded or whether something went wrong; to do that, we can inspect the status_code attribute of the response object.
If it doesn't return 200, the request has gone wrong and we need to double-check that the URL and data are correctly formatted. The complete list of possible HTTP status codes is available here, but in general we will see 200, meaning 'OK', or 400/404, meaning 'Bad Request' and 'Not Found' respectively.
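For example, a quick sanity check on the response might look like this:

```python
# Inspect the HTTP status code of the response object from above.
if r.status_code == 200:
    print("Request OK")
else:
    print(f"Request failed with status code {r.status_code}")
```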
Assuming we have a 200 status code, we are ready to inspect the data in the response. Since it is so common to receive data in JSON format, Requests comes with a built-in JSON decoder.
Printing the value of r.json() to the terminal quickly reveals that we have received a large amount of data back from the server covering slot availability, pricing, capacity, and more. Since slot availability is the goal of this project, we can loop through the JSON response and fill a dictionary mapping slots to their availability.
We first loop through each day of slots in the two-week window we requested, and then through every individual slot within each day, filling the dictionary as we go:
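A sketch of that loop follows. The key names ("slot_days", "slots", "slot_info", and so on) are assumptions based on the structure described above, so adjust them to match your actual response:

```python
# Walk the JSON response and record each slot's availability,
# reusing the response object r from the earlier snippet.
slots = {}
for day in r.json()["data"]["slot_days"]:
    for slot in day["slots"]:
        info = slot["slot_info"]
        slots[info["start_time"]] = info["status"]

# Keep only the slots not marked as unavailable.
available_slots = {t: s for t, s in slots.items() if s != "UNAVAILABLE"}
```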
Now that we have all the data we need and a way of extracting it programmatically whenever we wish, let's consider how to set up a way to inform our end users when a delivery slot becomes available.
Twilio is a cloud communications platform whose APIs let developers send and receive text messages and phone calls from their projects and apps. It opens up a whole world of possibilities: automated SMS notifications, two-factor authentication, chatbots, and more. Here, we will build a simple text notification system that texts us the details of any available delivery slots whenever the script runs.
Although Twilio is a paid service, it offers a free trial worth about £13. To get started with Twilio, we sign up on the website (no payment details required) and choose a phone number. Once that is done, Twilio provides us with an account SID and an authentication token for the project. Given that it costs roughly £0.08 to send a text, this is more than enough to get the project started.
With the Twilio account set up, we can start using the Python API Twilio provides. The Twilio module for Python can be installed simply using pip.
Twilio's Python API is straightforward to get started with, and plenty of documentation is available at https://www.twilio.com/docs. To send a text from our newly acquired phone number, we need something like the following:
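Here is a minimal example; the credentials and phone numbers are read from environment variables, and the variable names are our own choice:

```python
# Install first with: pip install twilio
import os

from twilio.rest import Client

# Authenticate with the account SID and auth token from the Twilio console.
client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

message = client.messages.create(
    body="Hello from the delivery slot checker!",
    from_=os.environ["TWILIO_NUMBER"],  # our Twilio phone number
    to=os.environ["MY_NUMBER"],         # the number to notify
)
print(message.sid)  # confirms the message was queued
```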
Adding this to our delivery slot script, we can check the data for available slots and, if any exist, send a text to the phone number of our choice with the notification of our choosing. This is outlined in the final segment of the script:
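A sketch of that final segment, reusing the available_slots dictionary and the Twilio client from the earlier snippets:

```python
# If any slots are available, text ourselves the details.
if available_slots:
    details = "\n".join(f"{start}: {status}" for start, status in available_slots.items())
    client.messages.create(
        body=f"Delivery slots available!\n{details}",
        from_=os.environ["TWILIO_NUMBER"],
        to=os.environ["MY_NUMBER"],
    )
```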
We now have a complete script that checks ASDA for available delivery slots and, if any exist, notifies us by text. The only remaining step in the project is finding a way to have the script run on its own, on a schedule.
Heroku is a cloud computing platform that lets developers deploy projects and apps to the cloud. It is well suited to running web apps with minimal set-up, making it perfect for personal projects. Here we will use Heroku as an easy way to run our script at scheduled intervals.
You can sign up to get started with Heroku here.
The first step is to create a new app to house our project:
To get the script up and running in the cloud, we need to create a new GitHub repository containing our script. You can find ours here for reference. We also need to create a file called requirements.txt, which lists all the package dependencies Heroku must install before it can successfully run the script.
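For a script like ours, requirements.txt needs little more than the two third-party packages it imports (version pins omitted here for brevity):

```
requests
twilio
```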
Next, we connect the app to the GitHub repository created for this project. Enabling 'automatic deploys' means that whenever we push to the main branch, the project is automatically redeployed with the latest updates, which is helpful if we want to continue developing the project while it is in production.
As mentioned earlier, there are several variables in the script we want to keep secret. We can do this using the 'Config Vars' settings of the Heroku app, an effortless way of storing sensitive project data that the script can access as environment variables:
The final step is getting our script to run automatically on a schedule. To do that, we need to install an add-on to the app: the Heroku Scheduler, which lets us run jobs every 10 minutes, every hour, or every day.
With the Heroku Scheduler installed, we can create a new job, choosing the frequency of the schedule and the command we would like to run. Since slots go very quickly, every 10 minutes is the best frequency for the scheduled job. The run command simply runs the Python script:
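Assuming the script is saved as slot_checker.py (substitute your own filename), the command is simply:

```
python slot_checker.py
```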
Now we can sit back, relax, and wait for the text notifications!
We have developed many skills through this project, and they open up a world of possibilities for new ones:
We can now inspect a site with DevTools, reverse engineer an API, and use Python's Requests library to scrape data, which gives us the skills needed to scrape data from almost any publicly available website.
We have set up Twilio, a communications API that lets us make calls and send texts. It provides an easy way to send notifications and opens up further possibilities with Twilio: alert systems, chatbots, robo-callers, and more.
We have deployed the project with Heroku, allowing scripts to run autonomously on a schedule in the cloud. This is an excellent skill to have: it removes the local dependency of running scripts on a PC or laptop and provides a great opportunity to showcase projects online. Thanks a lot for reading this blog!
To know more, contact Actowiz Solutions! You can also reach us for all your mobile app and web scraping service requirements.
✨ "1000+ Projects Delivered Globally"
⭐ "Rated 4.9/5 on Google & G2"
🔒 "Your data is secure with us. NDA available."
💬 "Average Response Time: Under 12 hours"
Look Back Analyze historical data to discover patterns, anomalies, and shifts in customer behavior.
Find Insights Use AI to connect data points and uncover market changes. Meanwhile.
Move Forward Predict demand, price shifts, and future opportunities across geographies.
Industry:
Coffee / Beverage / D2C
Result
2x Faster
Smarter product targeting
“Actowiz Solutions has been instrumental in optimizing our data scraping processes. Their services have provided us with valuable insights into our customer preferences, helping us stay ahead of the competition.”
Operations Manager, Beanly Coffee
✓ Competitive insights from multiple platforms
Real Estate
Real-time RERA insights for 20+ states
“Actowiz Solutions provided exceptional RERA Website Data Scraping Solution Service across PAN India, ensuring we received accurate and up-to-date real estate data for our analysis.”
Data Analyst, Aditya Birla Group
✓ Boosted data acquisition speed by 3×
Organic Grocery / FMCG
Improved
competitive benchmarking
“With Actowiz Solutions' data scraping, we’ve gained a clear edge in tracking product availability and pricing across various platforms. Their service has been a key to improving our market intelligence.”
Product Manager, 24Mantra Organic
✓ Real-time SKU-level tracking
Quick Commerce
Inventory Decisions
“Actowiz Solutions has greatly helped us monitor product availability from top three Quick Commerce brands. Their real-time data and accurate insights have streamlined our inventory management and decision-making process. Highly recommended!”
Aarav Shah, Senior Data Analyst, Mensa Brands
✓ 28% product availability accuracy
✓ Reduced OOS by 34% in 3 weeks
3x Faster
improvement in operational efficiency
“Actowiz Solutions' data scraping services have helped streamline our processes and improve our operational efficiency. Their expertise has provided us with actionable data to enhance our market positioning.”
Business Development Lead,Organic Tattva
✓ Weekly competitor pricing feeds
Beverage / D2C
Faster
Trend Detection
“The data scraping services offered by Actowiz Solutions have been crucial in refining our strategies. They have significantly improved our ability to analyze and respond to market trends quickly.”
Marketing Director, Sleepyowl Coffee
Boosted marketing responsiveness
Enhanced
stock tracking across SKUs
“Actowiz Solutions provided accurate Product Availability and Ranking Data Collection from 3 Quick Commerce Applications, improving our product visibility and stock management.”
Growth Analyst, TheBakersDozen.in
✓ Improved rank visibility of top products
Real results from real businesses using Actowiz Solutions
In Stock₹524
Price Drop + 12 minin 6 hrs across Lel.6
Price Drop −12 thr
Improved inventoryvisibility & planning
Actowiz's real-time scraping dashboard helps you monitor stock levels, delivery times, and price drops across Blinkit, Amazon: Zepto & more.
✔ Scraped Data: Price Insights Top-selling SKUs
"Actowiz's helped us reduce out of stock incidents by 23% within 6 weeks"
✔ Scraped Data, SKU availability, delivery time
With hourly price monitoring, we aligned promotions with competitors, drove 17%
Actionable Blogs, Real Case Studies, and Visual Data Stories -All in One Place
Discover how Mapping Product Taxonomy helps optimize 15+ product categories across Amazon, Walmart, and Target, ensuring better marketplace insights.
Actowiz Solutions scraped 50,000+ listings to scrape Diwali real estate discounts, compare festive property prices, and deliver data-driven developer insights.
Track how prices of sweets, snacks, and groceries surged across Amazon Fresh, BigBasket, and JioMart during Diwali & Navratri in India with Actowiz festive price insights.
This research report analyzes U.S. EV adoption and infrastructure trends using EV charging station data scraping from Tesla, Rivian, and ChargePoint.
Build and analyze Historical Real Estate Price Datasets to forecast housing trends, track decade-long price fluctuations, and make data-driven investment decisions.
Discover how Italian travel agencies use Trenitalia Data Scraping for Route Optimization to improve scheduling, efficiency, and enhance the overall customer experience.
Actowiz Solutions used scraping of 250K restaurant menus to reveal Diwali dining trends, top cuisines, festive discounts, and delivery insights across India.
Actowiz Solutions tracked Diwali Barbie resale prices and scarcity trends across Walmart, eBay, and Amazon to uncover collector insights and cross-market analytics.
Score big this Navratri 2025! Discover the top 5 brands offering the biggest clothing discounts and grab stylish festive outfits at unbeatable prices.
Discover the top 10 most ordered grocery items during Navratri 2025. Explore popular festive essentials for fasting, cooking, and celebrations.
Tracking Liquor Trends on Dan Murphy’s & BWS in Australia - Insights from Data Scraping & Sales Statistics, revealing market patterns.
Discover how Competitive Product Pricing on Tesco & Argos using data scraping uncovers 30% weekly price fluctuations in UK market for smarter retail decisions.
Benefit from the ease of collaboration with Actowiz Solutions, as our team is aligned with your preferred time zone, ensuring smooth communication and timely delivery.
Our team focuses on clear, transparent communication to ensure that every project is aligned with your goals and that you’re always informed of progress.
Actowiz Solutions adheres to the highest global standards of development, delivering exceptional solutions that consistently exceed industry expectations