This blog walks through code that extracts apartment data from the East Bay Area section of Craigslist. The code can be adapted to pull data from any category, region, property type, and so on.
First we imported BeautifulSoup from bs4, a module that can parse the HTML a web server returns. Later we check the length and type of the scraped result to make sure it matches the total number of posts on a page (120 here). You can reproduce our imports with setup code like the following:
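A minimal setup sketch follows. The search URL reflects Craigslist's East Bay apartment listing format at the time of writing and may have changed since; the variable names are our own.

```python
import requests
from bs4 import BeautifulSoup

# Request the first page of East Bay apartment listings
# (URL structure is Craigslist's at the time of writing)
response = requests.get('https://sfbay.craigslist.org/search/eby/apa')

# Parse the raw HTML the server returned
html_soup = BeautifulSoup(response.text, 'html.parser')
```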
We found the posts by using the find_all method on the newly created html_soup variable. We had to study the website's structure to locate the parent tag that wraps each post; inspecting the page's HTML shows that every listing sits inside its own parent tag.
To verify that every post on the page was captured, check the result in the following way:
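A sketch of that step is below; the 'result-row' class is an assumption based on Craigslist's older listing markup, consistent with the 'result-date' and 'result-hood' classes used later.

```python
# Each post sits inside a parent <li class="result-row"> tag
posts = html_soup.find_all('li', class_='result-row')

# Sanity check: the ResultSet should hold one entry per post on the page
print(type(posts))  # <class 'bs4.element.ResultSet'>
print(len(posts))   # 120
```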
The bs4.element.ResultSet class can be indexed, so we looked at the first apartment by indexing posts[0]. Everything printed below belongs to that one post:
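For example:

```python
# A ResultSet can be indexed like a list; grab the first post to inspect it
post_one = posts[0]
print(post_one)  # prints the full HTML of that single listing
```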
The price of this post is easy to get:
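A sketch, assuming the price sits in a span with class 'result-price' as in Craigslist's older markup:

```python
# The price text includes whitespace and a dollar sign, e.g. ' $1,995 '
post_one_price = post_one.find('span', class_='result-price').text.strip()
print(post_one_price)
```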
We scraped the time and date by specifying the 'datetime' attribute on the tag with class 'result-date'. Pulling the 'datetime' attribute directly saved a data-cleaning step, since we no longer need to convert that value from a string to a datetime object later. It could also be done as a one-liner by putting ['datetime'] at the end of the .find() call, but we split it into two lines for clarity.
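Something like the following, using the 'result-date' class named above:

```python
# Find the date tag, then pull its machine-readable 'datetime' attribute
post_one_time = post_one.find('time', class_='result-date')
post_one_datetime = post_one_time['datetime']

# Equivalent one-liner:
# post_one_datetime = post_one.find('time', class_='result-date')['datetime']
```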
The post title and URL come from the same tag: since the title is a link, the URL is pulled by specifying the 'href' attribute as an argument, and the title is the text of the tag.
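A sketch; the 'result-title' class is an assumption in line with the other 'result-*' classes:

```python
# The title is an <a> tag: its 'href' attribute is the post URL,
# and its text is the post title
post_one_title = post_one.find('a', class_='result-title')
post_one_link = post_one_title['href']
post_one_title_text = post_one_title.text
```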
Total square footage and bedroom count sit in the same tag, so we split its text and grabbed each value element-wise. The neighborhood is a tag with class 'result-hood', so we scraped the text from that.
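A sketch, assuming the shared tag is a span with class 'housing' whose text looks like '1br - 600ft2':

```python
# Split the housing text and grab each value element-wise
housing_text = post_one.find('span', class_='housing').text
post_one_bedrooms = housing_text.split()[0].strip()  # e.g. '1br'
post_one_sqft = housing_text.split()[2].strip()      # e.g. '600ft2'

# The neighborhood is the text of the tag with class 'result-hood'
post_one_hood = post_one.find('span', class_='result-hood').text
```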
The following block is a loop over the pages of East Bay results. Because square footage and bedroom count aren't always listed, we built a series of if statements inside the loop to handle all cases.
The loop starts on the first page and, for every post on each page, applies the following logic:
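A sketch reconstructing that loop under the assumptions above; the page range (Craigslist paginates with the 's' query parameter in steps of 120), the list names, and the polite random pause between requests are our own choices:

```python
from random import randint
from time import sleep

import numpy as np

# One list per field; each post appends one value to every list
post_timing, post_hoods, post_title_texts = [], [], []
bedroom_counts, sqfts, post_links, post_prices = [], [], [], []

# Craigslist paginates with the 's' query parameter in steps of 120
pages = np.arange(0, 1080, 120)

for page in pages:
    response = requests.get(
        'https://sfbay.craigslist.org/search/eby/apa?s=' + str(page))
    sleep(randint(1, 5))  # polite random pause between requests
    html_soup = BeautifulSoup(response.text, 'html.parser')

    for post in html_soup.find_all('li', class_='result-row'):
        # Skip posts without a neighborhood tag so the lists stay aligned
        if post.find('span', class_='result-hood') is None:
            continue

        post_timing.append(post.find('time', class_='result-date')['datetime'])
        post_hoods.append(post.find('span', class_='result-hood').text)

        title = post.find('a', class_='result-title')
        post_title_texts.append(title.text)
        post_links.append(title['href'])

        # Strip '$' and ',' so the price becomes an integer
        price_text = post.find('span', class_='result-price').text
        post_prices.append(int(price_text.strip().replace('$', '').replace(',', '')))

        # Bedrooms and square footage aren't always listed; handle each case
        housing = post.find('span', class_='housing')
        housing_text = housing.text if housing is not None else ''
        if 'br' in housing_text:
            # e.g. ' 1br - 600ft2 ' -> 1 (remove 'br' and cast to int)
            bedroom_counts.append(int(housing_text.split('br')[0].strip()))
        else:
            bedroom_counts.append(np.nan)
        if 'ft2' in housing_text:
            # e.g. ' 1br - 600ft2 ' -> 600 (remove 'ft2' and cast to int)
            sqfts.append(int(housing_text.split('-')[-1].strip().replace('ft2', '')))
        else:
            sqfts.append(np.nan)
```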
We included some cleaning steps in the loop, such as pulling the 'datetime' attribute, removing 'ft2' from the square-footage value, and casting that value to an integer. We also stripped 'br' from the bedroom count, since it is implied by the column itself. This way, the data cleaning is already partly done by the time scraping finishes. Elegant code is best when you can get it! We could do more here, but the code would become so specific to this region that it might not work in other areas.
The following code builds a DataFrame from the lists of scraped values:
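Something like this, with column names of our own choosing:

```python
import pandas as pd

# Zip the parallel lists into one DataFrame, one row per post
eb_apts = pd.DataFrame({
    'posted': post_timing,
    'neighborhood': post_hoods,
    'post title': post_title_texts,
    'number bedrooms': bedroom_counts,
    'sqft': sqfts,
    'URL': post_links,
    'price': post_prices,
})
eb_apts.head(10)
```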
Wonderful! There it is. There is undoubtedly a bit of data cleaning left to do; we will get through that real quick, and then it's time to explore the data!
Sadly, after removing duplicate URLs, we were left with only 120 instances. Those numbers will differ if you run the code yourself, since different posts appear at different scraping times. There were also around 20 posts with no square footage or bedroom count listed. Statistically this isn't a huge data set, but we took note of that and pushed forward.
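The deduplication step, keyed on the post URL:

```python
# Identical posts can appear on more than one page; keep one row per URL
eb_apts = eb_apts.drop_duplicates(subset='URL')
len(eb_apts)  # 120 in our run; your count will differ
```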
We wanted to see the price distribution for the East Bay, so we made the plot below. Using the .describe() method gave us a more detailed look: the cheapest place is $850, while the most expensive is $4,800.
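A sketch of both steps; the plot styling is our own:

```python
import matplotlib.pyplot as plt

# Summary statistics: count, mean, std, min/max, and quartiles
print(eb_apts['price'].describe())

# Distribution of prices across all scraped posts
eb_apts['price'].hist(bins=30)
plt.xlabel('Price (USD)')
plt.ylabel('Number of posts')
plt.title('East Bay apartment price distribution')
plt.show()
```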
The next code block produces a scatter plot in which the points are colored by bedroom count. It shows a clear and understandable stratification: we see layers of points clustered around particular prices and square footages, and as price and square footage increase, so does the number of bedrooms.
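A sketch using pandas' built-in scatter plotting; the column names follow the DataFrame above:

```python
# Price vs. square footage, colored by bedroom count
ax = eb_apts.plot(kind='scatter', x='sqft', y='price',
                  c='number bedrooms', colormap='viridis')
ax.set_xlabel('Square footage')
ax.set_ylabel('Price (USD)')
plt.show()
```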
We fitted a line on these two variables. Now let's look at the correlations; we used eb_apts.corr() to get them:
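For example:

```python
# Pairwise correlations between the numeric columns
# (newer pandas versions need numeric_only=True on mixed-type frames)
eb_apts.corr(numeric_only=True)
```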
As expected, the correlation is strongest between bedroom count and square footage. That makes sense, since square footage increases as the number of bedrooms goes up.
We wanted to know how location affects price, so we grouped by neighborhood and aggregated by taking the mean of each variable.
We produced this with a single line of code: eb_apts.groupby('neighborhood').mean(), where 'neighborhood' is the by= argument and the aggregation function is the mean.
We noticed that North Oakland shows up under two labels, Oakland North and North Oakland, so we recoded one of them into the other like so:
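A sketch of the recode; the exact neighborhood strings are hypothetical and should match whatever appears in your scraped data:

```python
# Standardize the two spellings of the same area onto one label
eb_apts['neighborhood'] = eb_apts['neighborhood'].replace(
    {'Oakland North': 'North Oakland'})  # hypothetical exact strings
```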
Scraping out the price column and sorting it in ascending order reveals the cheapest and most expensive places to live. The complete line of code is eb_apts.groupby('neighborhood').mean()['price'].sort_values(), which produces the following output:
Finally, we looked at the spread of prices within each neighborhood. By doing so, we saw how much, and to what extent, prices can vary from neighborhood to neighborhood.
Here's the code that produces the plot that follows:
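A sketch using seaborn's boxplot (an assumption; any box-plot routine would do):

```python
import seaborn as sns

# One box per neighborhood shows the spread of its prices
plt.figure(figsize=(12, 6))
sns.boxplot(x='neighborhood', y='price', data=eb_apts)
plt.xticks(rotation=75)  # keep the neighborhood labels readable
plt.tight_layout()
plt.show()
```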
Berkeley had an enormous range. That may be because it includes Downtown Berkeley, South Berkeley, and West Berkeley. A future version of this project should consider narrowing the scope of these variables so they better reflect the price variability between neighborhoods within each city.
Well, that's it from us! Feel free to contact us if you want to know more. You can also reach out to us for all your mobile app scraping and web scraping service requirements.