Proxy Market News - Proxyway
Your Trusted Guide to All Things Proxy
https://proxyway.com/news

New Review: NodeMaven
News | Adam Dubois | Wed, 15 Oct 2025

NodeMaven joins the ranks of our reviewed providers.

NodeMaven’s target audience is multi-account managers. Instead of erecting a mobile dongle farm like most providers in this niche do, it takes a different approach: it filters a peer-to-peer network of residential IPs for uptime and quality. In theory, this should produce a more consistent, yet still highly diverse proxy pool.

Does NodeMaven succeed? In our limited tests, the quality filter did make a difference – but only when sticky sessions were involved. Regardless, we found NodeMaven’s service easy to use and its infrastructure performance competitive. As such, we’re giving this provider a score of 8.6.
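For context: most residential providers implement sticky sessions by encoding a session ID into the proxy username, so reusing the ID keeps the same exit IP. Below is a minimal sketch of the pattern in Python; the gateway address and credential syntax are hypothetical placeholders, not NodeMaven’s actual format.

```python
import requests

# Hypothetical residential gateway and credential format. Check the
# provider's dashboard for the real host, port, and session syntax.
GATEWAY = "gateway.example-provider.com:8080"
USER, PASSWORD = "customer-abc", "secret"

def session_proxies(session_id: str) -> dict:
    # Many residential providers encode a session ID into the proxy
    # username; reusing the same ID pins you to the same exit IP.
    proxy = f"http://{USER}-session-{session_id}:{PASSWORD}@{GATEWAY}"
    return {"http": proxy, "https": proxy}

# Two requests with the same session ID should share an exit IP;
# a new ID rotates to a fresh one.
print(requests.get("https://api.ipify.org", proxies=session_proxies("a1")).text)
print(requests.get("https://api.ipify.org", proxies=session_proxies("a1")).text)
print(requests.get("https://api.ipify.org", proxies=session_proxies("b2")).text)
```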

You can find the full review here.

OxyCon 2025: A Recap
News | Adam Dubois | Thu, 09 Oct 2025

Our virtual impressions from Oxylabs’ sixth annual conference on web scraping.
OxyCon, one of the two major conferences in our field, flew by in an instant. If you didn’t manage to get on board, don’t worry – we watched it all and documented our impressions here. Oxylabs will make the talks available on demand, so you can quickly get acquainted with them before tuning in.

You'll find our coverage of earlier OxyCons and other major industry events here.

Organizational Matters

Oxylabs stayed true to its tested formula and made the conference online only. Anyone was free to attend, as long as they registered beforehand. On the day of the event, the organizer sent an email with a link and a code. It led to a lobby that included a video stream, a Slido widget for questions, and the agenda – a very standard affair.

[Image: The platform for online viewers.]

As a European company, Oxylabs catered mainly to its home continent, in particular the British Isles. The schedule used BST as the reference timezone and ran from 12 to 5:30 PM. East Coast Americans could realistically watch it, but it was too early for the West Coast and too late for most of Asia.

We always find this fascinating, and the year 2025 was no exception: despite opting for online-only attendance, the organizer still had a venue with hosts and a real audience. We never saw the live attendees, but they could be heard cheering and clapping. We presume these were mostly Oxylabs’ employees. 

To make attendance more exciting, Oxylabs ran several quizzes with prizes on its Discord. The server also had a conference chat where presenters could tackle the questions that didn’t make the cut on stage due to time constraints. Believe us – that was necessary, as each talk prompted a surprising number of questions. 

All in all, the event went smoothly, and it was clear that the organizers have more or less perfected this format. Our only observation is that it was short – including all the talks, panel discussions, and breaks, OxyCon took only five and a half hours in total.

Main Themes

No surprises here: the heroes of this narrative were large language models. We saw them in all shapes and sizes: as parsing assistants, agents, and code generators. Zia Ahmad brought the theoretical chops, the famous Pierluigi from The Web Scraping Club shared some practical applications, while team Oxylabs demonstrated AI in their products.

We’re sure this topic will remain on top of everyone’s minds for the foreseeable future (or until the impending burst of the AI bubble and subsequent collapse of the financial system, if you haven’t had your morning coffee yet). But who can blame them, really?

We loved that Oxylabs managed to fit in two panel discussions. The lawyers discussed large language models from their own perspective, which is always fascinating to follow. The second panel addressed another elephant in the room, which tends to be overshadowed by AI – unblocking. Both are highly recommended, but we’ll talk about them later in this recap. 

Our final note here is that OxyCon had not one, but two introductory speeches. The first was given by co-CEO of Tesonet (the company behind NordVPN) Tomas Okmanas. The second, which also took no longer than five minutes, warned about the dangers of gatekeeping and monopolizing data. But we shouldn’t let that put a cloud(flare) over our skies. Sorry, we couldn’t resist it.

The Talks

Talk 1. From Chaos to Clarity: Data Structuring in Large-Scale Scraping

Aleksandras Šulženko, Product Owner at Oxylabs, kicked off the presentations with a walk through history and a feature reveal. He recounted all of his company’s approaches to data parsing, culminating in AI-made parsers that heal themselves. 

The company’s road has been long and winding, with seven steps leading to the current implementation. They started with dedicated scrapers, dabbled in machine learning models, and even accepted manual parsing instructions, before arriving at an LLM-based approach. Aleksandras narrated the process very well, highlighting the strengths and weaknesses of each step. 

The apex approach generates selectors from plain language prompts, with an optional schema to ensure better accuracy. However, its main breakthrough is that the system can automatically notice once these static parsers break, regenerating them without manual intervention. At this point, the flow of the presentation collapsed a little (because how the heck do you demonstrate self-healing parsers?), but we still consider it a worthy watch.
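Oxylabs didn’t share implementation details, but the self-healing loop can be sketched: cache LLM-generated selectors, run them as ordinary static parsers, and call the model again only when extraction comes back empty. A toy version, with a stub standing in for the model call:

```python
from parsel import Selector  # pip install parsel

SCHEMA = {"title": "product name", "price": "current price"}

def llm_generate_selectors(html: str, schema: dict) -> dict:
    """Stub: in a real system, this asks an LLM to map each schema
    field to a CSS selector for the given page. Hard-coded here."""
    return {"title": "h1::text", "price": ".price::text"}

cached_selectors: dict = {}

def parse(html: str) -> dict:
    global cached_selectors
    if not cached_selectors:
        cached_selectors = llm_generate_selectors(html, SCHEMA)
    sel = Selector(text=html)
    result = {f: sel.css(css).get() for f, css in cached_selectors.items()}
    # Self-healing trigger: if the cached static selectors stop matching
    # (the site layout changed), regenerate them and re-parse once.
    if all(v is None for v in result.values()):
        cached_selectors = llm_generate_selectors(html, SCHEMA)
        result = {f: sel.css(css).get() for f, css in cached_selectors.items()}
    return result

print(parse("<html><h1>Widget</h1><p class='price'>9.99</p></html>"))
```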

[Image: Aleksandras showing the vices and virtues of the parsing methods Oxylabs has tried.]

Talk 2. Scaling E-Commerce Data Extraction: From Zero to 10 Billion Products a Day

A good one. Fred de Villamil, the self-proclaimed CTO of scale-ups, explained how his company NielsenIQ manages to run over 10,000 precisely geolocated spiders for digital shelf analytics. In a nutshell, Fred’s team helps brands like Walmart to understand how their stores perform online. 

The speaker outlined the three main challenges he faces, namely coverage, resource management, and anti-bots. He then introduced Nielsen’s strategy for building a process that scales. It involved custom anti-bot tooling, a centralized control center, robust monitoring tools, and even an academy for onboarding new people to his team of 50 web scraping specialists. 

Some facts: it takes between six and eight days to build a spider, and the hardest bot protection system to overcome is PerimeterX. You’ll find plenty more where that came from.

[Image: Fred's employer is hoovering up data at an industrial scale.]

Talk 3. Creating an AI-Powered Price Comparison Tool With Cursor and Oxylabs’ AI Studio

Another product demonstration. This time, Oxylabs’ Head of Data Rytis Ulys took the wheel to showcase his company’s new AI Studio. It includes endpoints for scraping and crawling websites, searching Google, and controlling cloud browsers – they’re meant for AI startups and bear a strong resemblance to Firecrawl. 

Rytis introduced a hypothetical scenario, where he wanted to open a bike store and needed competitive intelligence. He used Cursor, as well as AI Studio’s crawling and browser endpoints to create a scraper and build two sets of product data from competitor websites within minutes. 

The demo was pre-recorded, but it showed what the presenter wanted viewers to witness: that it’s now possible to quickly get data without building parsers, fighting with blocking mechanisms, or even knowing how to code well. The current iteration of AI Studio feels a little like a playground, removed from Oxylabs’ other services. But its utility is evident, and we’re sure the provider will figure out a way to incorporate it into the main product line-up.

[Image: AI Studio includes AI Crawler, AI Scraper, AI Map, AI Search... and a Browser Agent.]

Talk 4. The AI-Scraper Loop: How Machine Learning Improves Web Scraping (and Vice Versa)

Zia Ahmad, Data Scientist at Turing, explored how AI (in a broader sense than just LLMs) and web scraping feed off one another, creating a virtuous cycle of improvement.

The talk started off by showing how web scraping complements ML, which boiled down to stating that language models need a lot of data to work. In the second part, the speaker tried exploring web scraping through an LLM interface, with varying results. He then moved on to data parsing techniques, which included computer vision, sequence models for selectors, and querying multiple models at once to reach a consensus.
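The consensus technique in particular is simple to prototype: query several models for the same field and accept a value only when a quorum agrees. A toy sketch with stubbed model calls:

```python
from collections import Counter

def extract_with_model(model: str, html: str, field: str) -> str:
    """Stub standing in for a per-model LLM extraction call."""
    fake_answers = {"model-a": "19.99", "model-b": "19.99", "model-c": "24.99"}
    return fake_answers[model]

def consensus(html: str, field: str, models: list[str], quorum: int = 2):
    votes = Counter(extract_with_model(m, html, field) for m in models)
    value, count = votes.most_common(1)[0]
    # Accept the value only if enough models agree; otherwise flag the
    # page for review instead of storing a low-confidence guess.
    return value if count >= quorum else None

print(consensus("<html>...</html>", "price", ["model-a", "model-b", "model-c"]))
# -> 19.99: two of the three models agree
```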

Zia is an educator with many courses under his belt, so we enjoyed learning about the possible machine learning techniques for data parsing and validation. But when it came to data access, we found his arguments somewhat lacking.

[Image: Turns out, democracy has a role even in data parsing!]

Panel Discussion 1. Web Scraping and AI: Legal Touchpoints and Ways Forward

The first panel discussion had three lawyers (Mindaugas Civilka from Tegos Law Firm, Alex Reese from Farella Braun + Martel, and Kieran McCarthy from McCarthy Law Group), one VP of Engineering (Chase Richards from Corsearch), and Denas Grybauskas – also a lawyer, from Oxylabs – as the moderator. The panelists have worked on some high-profile cases, such as hiQ v. LinkedIn, so the line-up here was very strong.

The discussion touched upon quite a few topics. For example, we learned about the main legal questions web scraping raises, as well as AI legislation and the changes it has brought to the legal world. Much attention was given to copyright, including the concept of copyright preemption. The panelists also spoke about how to balance the interests of AI companies against those of the rest of the world; the efforts include Cloudflare’s gatekeeping, remaking the robots.txt file, and more.

It was a brilliant choice to include lawyers representing both American and European legal systems. All in all, we highly recommend watching this panel.

[Image: The three plus two panelists.]

Talk 5. How AI Reshaped My Workflow As a Scraper Developer and Content Creator

The final solo presentation involved Pierluigi Vinciguerra from DataBoutique and The Web Scraping Club. He shared how LLMs helped him to automate time-consuming tasks both as a content creator and a web scraping professional. 

In particular, Pierluigi built several helper tools. One of them automatically manages the access level and permissions of paying newsletter users. The second aggregates relevant articles from sources like Reddit and Hacker News, compiling a summarized reading list. After this, Pierluigi showed his LLM-assisted scraping setup which included a blueprint with detailed instructions to ensure that the model will always adhere (to the best of its abilities) to best practices. 
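Pierluigi didn’t publish his blueprint, so the rules below are purely our illustration of the pattern: a standing instruction block prepended to every prompt, nudging the model toward house practices.

```python
# A hypothetical "blueprint" of house rules; the specific rules below
# are illustrative, not Pierluigi's actual instructions.
BLUEPRINT = """You are helping write a production web scraper. Always:
1. Prefer plain HTTP requests; suggest a headless browser only if required.
2. Respect robots.txt and add a configurable delay between requests.
3. Wrap all network calls in retries with exponential backoff.
4. Parse with explicit selectors and fail loudly on missing fields.
5. Never hard-code credentials; read them from environment variables."""

def build_prompt(task: str) -> str:
    # Prepending the blueprint to every task keeps the model's output
    # consistent across sessions and team members.
    return f"{BLUEPRINT}\n\nTask: {task}"

print(build_prompt("Scrape product titles and prices from the listing page."))
```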

Practical examples aside, Pierluigi shared some nuggets of wisdom. The main takeaway is becoming common knowledge, but it’s still worth repeating: language models are amazing for horizontal scaling. But the most striking statement was that AI wrote over 90% of his code last year. We enjoyed watching and recommend this talk.

[Image: When LLMs dream of electric sheep, Anthropic's CEO dreams of Pierluigi.]

Panel Discussion 2. Advanced Web Scraping: Techniques to Stay Unblocked

The second panel included Ieva Šataitė from Oxylabs, Juan Riaza Montes from Idealista, Hocine Amrane from Nielsen IQ, and Tadas Gedgaudas, ex-Oxylabs who left to found topYappers. The discussion was moderated by Juras Juršėnas, COO at Oxylabs. We’ll say outright that it’s one of the must-watches of the conference. 

The panelists started by sharing what changed in a year. Of course, the big topic was Google cracking down on web scraping. But in general, unblocking has become harder and now requires understanding deep tech. Anti-bot solutions have become a big business and, as the guys from Nielsen love to say, what took two days to unblock now can take two weeks. 

On the upside, there’s a lot of activity in open source tools, which are good for up to 90% of use cases. The key is to have a system where you can quickly plug in and test a tool. However, most agreed that it makes no sense to bang your head against the wall – at some point, the better option is to outsource. 
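Such a plug-in-and-test setup can be as simple as a registry of fetchers sharing one signature, plus a smoke test that runs each against a target. A minimal sketch; the tools behind each entry are placeholders for things like curl-impersonate, headless browsers, or commercial unblockers.

```python
import requests

# Each "unblocker" is just a callable: URL in, HTML out. Real entries
# would wrap other tools behind the same signature.
def plain_http(url: str) -> str:
    return requests.get(url, timeout=15).text

def browser_headers(url: str) -> str:
    headers = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) "
                             "AppleWebKit/537.36 (KHTML, like Gecko) "
                             "Chrome/120.0 Safari/537.36"}
    return requests.get(url, headers=headers, timeout=15).text

UNBLOCKERS = {"plain": plain_http, "headers": browser_headers}

def smoke_test(url: str, must_contain: str) -> dict:
    """Run every registered tool against a target and report which pass."""
    results = {}
    for name, fetch in UNBLOCKERS.items():
        try:
            results[name] = must_contain in fetch(url)
        except requests.RequestException:
            results[name] = False
    return results

print(smoke_test("https://example.com", "Example Domain"))
```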

As in the previous panel, Cloudflare was on top of everyone’s minds, and it was evident that the incentive system of the web was changing. The panelists shared their other fears, such as new fingerprinting methods like JA4, the increasing resources required to find unblocking techniques, and the possible need to use real devices to scrape.

The discussion addressed many smaller questions: for example, if DataDome is the hardest anti-bot to defeat or if Asian e-commerce stores really serve more fake data than other continents. All in all, despite their concerns, the panelists remained optimistic about the future.

[Image: Unblocking websites is no joke, but there's no need to take things too seriously.]

Bottom Line

That was 2025’s OxyCon. We learned a lot, and hopefully, so have you! Go watch the talks while we wait for the second edition of Zyte’s Extract Summit.

Zyte Extract Summit 2025 (Austin): A Recap
News | Adam Dubois | Thu, 02 Oct 2025

Our virtual impressions from the first edition of Zyte’s annual web scraping conference.

Extract Summit is one of the two yearly events dedicated to web scraping, the other being OxyCon. For the first time, the conference spanned two continents: North America and Europe. 

This recap covers the US leg, which took place in Austin at the end of September. Zyte has made the talks freely available on YouTube, so you can use this article to quickly learn about them before committing.

The Dublin edition is set for early November. We plan to cover it, as well.

Organizational Matters

After flip-flopping between Dublin and Austin (in 2024, the venue was Austin), Zyte decided to simply cover both locations. This spelled great news for audiences that invariably suffered due to time differences. Being located in Europe, we know this pain all too well.

The Austin edition ran over two days. The first day had five technical workshops run by Zyte. Day 2, dubbed the Main Event, featured ten presentations. Virtual attendance was free, but it only included day two. Live tickets cost several hundred dollars for both days; the sum covered access to the workshops, the venue – and, of course, tacos.

Once again, being geographically challenged, we were unable to watch the talks live. But Zyte was gracious enough to give us access to the recordings shortly after. Live viewers had Vimeo for the stream and Slido right beside it to ask any questions that arose. 

Curiously, there were no panel discussions this year – usually, organizers try to include at least one. And, maybe owing to time constraints, the presenters took very few questions after their talks, often just one or two. 

The third thing we noticed was how many industry insiders there were. Aside from Zyte’s staff, we counted five web scraping infrastructure providers and only one company that offers a service based on the data they process (without even scraping it!).

[Image: The platform for online viewers.]

Main Themes

Very much expectedly, the conference revolved around large language models. However, the topic didn’t feel overwhelming, as Zyte struck a good balance by sprinkling in flavor presentations. By flavor, we mean case studies or know-how specific to the speaker’s line of business, such as Ovidiu’s war stories from working in an IP sourcing company.

The talks didn’t single out data processing, which, alongside natural language input, is arguably AI’s main strength in our niche. We also learned about generating spiders through the use of LLMs and AI agents.

Little attention was given to unblocking – come to think of it, aside from Julien’s woes with scraping Google, the topic was omitted altogether. Maybe companies are less willing to share their secret sauce as the stakes grow, which is a broader trend we’ve noticed over the past year. 

The overarching vibe (excuse our Gen-Z) was that many exciting things are coming along, but nothing’s been decided yet – and that there are plenty of opportunities to capitalize on. Pretty inspiring, if you ask us!

The Talks

Talk 1. How to Make AI Coding Work for Enterprise Web Scraping

A product demo from the get-go! Zyte brought two heavy hitters, Ian Lennon (CPO) and John Rooney (Dev Engagement Manager) on stage to showcase what the company has been cooking this year. 

Without beating around the bush, it’s a VS Code extension called Web Scraping Copilot. The tool’s main purpose is to help developers build Scrapy spiders faster by writing objects, fixtures, and other code needed to scrape websites. It achieves this by coupling GitHub’s Copilot and Zyte’s MCP server. 

The presentation had two parts. First, John fired up VS Code and promptly built a spider on stage, demonstrating how to fetch and structure several product pages. Ian then took over and gave a broader perspective from the business point of view. 

The gist was that instead of making solutions, Zyte aims to create components to help engineers do web scraping well. This is all done with enterprise requirements in mind, in particular determinism, modularity, and ownership of code. 

What’s interesting is that you don’t even need to buy Zyte’s API for the extension to work – it accepts any proxy or unblocking tool. The extension itself is free for now, but you may want to get a paid version of GitHub’s Copilot to avoid restrictions.

[Image: Straight out of the oven.]

Talk 2. How to Make AI Coding Work for Enterprise Web Scraping

In the first presentation, Ian mentioned an autonomy scale where AI tools move from assistance towards agency as they progress. Zyte’s Senior Data Scientist Ivan Sanchez took this idea and fleshed it out in the context of AI agents for web scraping. 

The first part covered various types of AI agents, drumming up hype with quotes about their adoption. Ivan then took viewers back to reality: in their current shape, AI agents kind of suck for web scraping. He gave three slides with challenges and potential solutions before introducing Zyte’s attempt at overcoming the shortcomings.

Wait a minute, are we talking about Web Scraping Copilot all over again? As it turns out, yes. Ivan shared more context about the origins of the tool (an internal project) and its innards: Copilot relies on mini-agents and MCP sampling to achieve what insular agents can’t. In the end, he teased viewers with a testimonial claiming the tool cut spider building time from eight hours to just two. Impressive!

[Image: Looking at the slide, more like a gaping hole.]

Talk 3. The Technical Reality of Processing 10% of Google’s Global Search Volume

In the third talk, Julien Khaleghy, CEO of a major SERP API called… SerpApi, shared the trials and tribulations of scraping Google data in 2025. The takeaway: despite SerpApi spending ten times the resources, Google is now twice as slow to scrape. Ouch.

What makes this search engine such a naughty target? Besides the infamous move to JavaScript dependency in February and the removal of more than 10 results per page, Julien’s team encounters more CAPTCHAs, more diverse CAPTCHAs, more and sometimes permanent (!) IP bans, and JS challenges, among other things.
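The defensive shape against this kind of escalation is fairly standard, even if SerpApi’s actual pipeline is far more involved: classify each response before parsing, back off and rotate on CAPTCHAs, and retire banned IPs for good. A simplified sketch with illustrative detection signals:

```python
import time
import requests

def classify(resp: requests.Response) -> str:
    # Illustrative signals only; real detection inspects much more.
    if resp.status_code == 429 or "unusual traffic" in resp.text:
        return "captcha"
    if resp.status_code == 403:
        return "ban"
    return "ok"

def fetch_serp(url: str, proxy_pool: list[str], max_tries: int = 3):
    for attempt in range(max_tries):
        if not proxy_pool:
            return None  # every IP burned; time to resupply
        proxy = proxy_pool[attempt % len(proxy_pool)]
        resp = requests.get(url, proxies={"http": proxy, "https": proxy},
                            timeout=20)
        verdict = classify(resp)
        if verdict == "ok":
            return resp.text
        if verdict == "ban":
            # Bans can be permanent, so retire the IP entirely.
            proxy_pool.remove(proxy)
        time.sleep(2 ** attempt)  # back off before rotating to a new IP
    return None
```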

The presentation gives a fascinating opportunity to learn how a tech giant behaves when it starts taking web scrapers seriously. As a bonus, Julien throws in a performant open source Ruby parsing library – because we’re in this together.

[Image: Julien’s look says it all.]

Talk 4. You Might Want to Reconsider Scraping with LLMs

The fourth talk really subverted our expectations. Delivered by Jerome Choo, Director of Growth at Diffbot, it examined the performance of large language models in data extraction.

Why did we find the talk so subversive? Well, that’s because Diffbot has been an early adopter and major proponent of machine learning that’s not based on gen-AI. We expected Jerome to demolish LLMs, prying open their weaknesses for all to see. What we witnessed was actually an honest confirmation that AI is pretty darn good at putting data into structures. 

Throughout the talk, Jerome walked us through multiple data transformation scenarios, such as extracting news signals about M&As or getting the required information from data processing agreements. The presenter compared various language models and gave useful tips which culminated in this nugget of wisdom: write schemas, not rules.
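“Write schemas, not rules” translates directly into structured-output prompting: describe the target shape and let the model fill it. Here’s a minimal sketch using OpenAI’s JSON mode; the model name and schema fields are illustrative, and other structured-output APIs work much the same way.

```python
import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative schema: describe the shape, not extraction rules.
SCHEMA = {"company": "string", "deal_type": "string", "amount_usd": "number or null"}

def extract(text: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any JSON-mode-capable model works
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": f"Extract M&A signals as JSON matching: {json.dumps(SCHEMA)}"},
            {"role": "user", "content": text},
        ],
    )
    return json.loads(resp.choices[0].message.content)

print(extract("Acme Corp agreed to acquire Globex for $2.1 billion."))
```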

[Image: Jerome swears to tell the truth, the whole truth, and nothing but the truth.]

Talk 5. Do You Really Need a Browser? Rethinking Web Scraping at Scale

Another contrarian presentation – but this time, without a twist. Sarah McKenna from Sequentum, a serial presenter at Zyte’s events, challenged the prevailing tendency to run everything through a web browser.

Sarah’s response was mainly prompted by the rise of AI agents and their reliance on browsers. We have Perplexity’s Comet browser, as well as investments into cloud infrastructure like Browserbase and Browser-Use. However, hype is one thing, and reality is another. Sarah cited works revealing the limitations of LLMs and reminded everyone just how costly and brittle browser-based scraping is. 

In-house, Sequentum behaves like any sane (read: bootstrapped) web scraper does: it fires up browsers only when forced to, otherwise extracting necessary identifiers and turning to a lightweight HTTP library. Sarah also spoke about Cloudflare’s gatekeeping efforts, battles over standards, and more, concluding that “the browser opportunity” is still wide open for grabs. 
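That HTTP-first policy boils down to a two-tier fetcher: try a cheap request, and escalate to a real browser only when the response looks like a challenge page. A sketch using requests and Playwright, with an intentionally naive blocking heuristic:

```python
import requests
from playwright.sync_api import sync_playwright  # pip install playwright

def looks_blocked(resp: requests.Response) -> bool:
    # Naive heuristic: challenge pages tend to come with error codes
    # or ask for JavaScript. Real checks are target-specific.
    return resp.status_code in (403, 429) or "enable JavaScript" in resp.text

def fetch(url: str) -> str:
    resp = requests.get(url, timeout=15)
    if not looks_blocked(resp):
        return resp.text  # the cheap path: no browser started at all
    # Escalate: spin up a real browser only for pages that demand it.
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        html = page.content()
        browser.close()
    return html
```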

Unfortunately, the slides weren’t formatted properly. But it was still an interesting talk to follow.

[Image: You better believe it.]

Talk 6. Web Scraping as Social Practice: Balancing Ethics and Efficiency in a Data-Hungry World

Rodrigo Silva Ferreira, QA Engineer at Posit, gave a presentation about collecting data responsibly. 

Rodrigo Silva isn’t a professional or even habitual web scraper, so his talk was naive at times and often sounded more like a school project. However, the speaker’s sincerity and description of his socially-oriented personal projects left us all the better for having watched it. 

The most valuable takeaway for us was that scraping is never just technical, which we sometimes tend to forget. It can have a big impact not only for those doing the scraping, but also the destination, and the people or communities whose data we collect.

[Image: Web scraping can be seen as a negotiation between sometimes conflicting goals.]

Talk 7. Balancing Innovation and Regulation in Data Scraping

Another serial speaker at Extract Summits, Zyte’s Chief Legal Officer Sanaea Daruwalla, brought viewers up to date with the latest legal developments in web scraping and artificial intelligence. Considering that all we do is scrape data and talk about AI, this one is a must.

To keep this sprawling and complex topic digestible, Sanaea took the brilliant concept of scales, putting innovation on one end and regulation on the other. She then tackled several pertinent topics: public web data, copyright in AI, and the use of personal data.

Compared to 2024, the scales tipped strongly toward innovation, but only when it came to scraping public data. The other cases are much less straightforward. Some of the takeaways were that you shouldn’t collect pirated content, and that the EU takes personal information very seriously.

[Image: Sanaea discussed the balance of innovation and regulation in the most contentious areas of web scraping.]

Talk 8. Building Blocks of a Web-Scraping Business

Victor Bolu is responsible for ensuring the profitability of his business, Webautomation, and he came on stage to talk about it. To be more precise, he brought a generalized plan for small web scraping businesses, together with ideas for bringing margins closer to a typical SaaS business.

Victor whipped out charts and numbers; he broke down the cost of goods, spoke about LTVs, CACs, and other terms found in the books of business management. He gave two case studies, showing why more revenue may not result in profit.

Victor even concocted a three-step margin improvement strategy that revolved around cutting proxy costs, automating support, and pushing upsells with AI. Some of the advice was a little hand-wavy (such as building models that auto-adjust to bot changes), but the talk was delivered from a business and not a technical point of view. This one’s optional.

[Image: Victor’s three-step plan to financial success.]

Talk 9. 99 Problems but a /24 Ain’t One (Except When It Is)

That’s one brain twister of a title. Ovidiu Dragusin from Servers Factory described the daily challenges of an IP broker – or, as he cheekily called them, war stories. We saw Ovidiu last year as part of a panel; however, he really shone having the stage all to himself.

Compared to some other proxy-oriented talks we’ve seen, this one wasn’t heavy on content. (In fact, we probably learned more during the brief Q&A session.) The speaker opted to share three anecdotes concerning SLAs, disappearing suppliers, and miscommunication with new IP sources. The overarching message was that chaos is the status quo, and that these crazy people wouldn’t have it any other way.

Ovidiu came to entertain and maybe make viewers empathize with IP brokers. He succeeded.

[Image: What clients want isn’t always what they get – but there are good reasons why.]

Talk 10. Data-Quality Framework for User-Submitted Financial Documents

Egor Panlov from Truv closed the conference by delivering a talk about extracting information from financial documents. It’s interesting that his company doesn’t even scrape the web; regardless, data parsing is one of the major problem areas in our field.

Egor began by introducing income verification documents (like tax statements or pay stubs) and the challenges they bring: usually missing or inconsistent records and varying document formats. He then walked us through the company’s verification system, showing how they normalize fields, validate data, and make sure that nothing is inaccurate or tampered with. We’re talking about people’s money, after all!
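Truv’s internals weren’t shown in full, but the described normalize-then-validate flow looks roughly like this: canonicalize field formats first, then run cross-field consistency checks so that a doctored pay stub fails the arithmetic. The field names here are hypothetical.

```python
from decimal import Decimal

def money(value: str) -> Decimal:
    # Canonicalize "$5,000.00" and similar into an exact Decimal.
    return Decimal(str(value).replace("$", "").replace(",", ""))

def normalize(doc: dict) -> dict:
    return {
        "employee": doc["employee"].strip().title(),
        "gross": money(doc["gross_pay"]),
        "net": money(doc["net_pay"]),
        "deductions": money(doc["deductions"]),
    }

def validate(doc: dict) -> list:
    errors = []
    # Cross-field arithmetic: a tampered pay stub usually breaks it.
    if doc["gross"] - doc["deductions"] != doc["net"]:
        errors.append("net pay does not equal gross minus deductions")
    if doc["gross"] <= 0:
        errors.append("gross pay must be positive")
    return errors

stub = {"employee": " jane doe ", "gross_pay": "$5,000.00",
        "net_pay": "$3,800.00", "deductions": "$1,200.00"}
print(validate(normalize(stub)))  # [] means the document passes
```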

Large language models played a role here as well, naturally within strict guardrails. In fact, they’ve replaced OCR models for inputs like photos. Egor’s presentation received the most questions of all the talks, maybe due to fewer time constraints. However, we counted over 40 slides, many filled with tables and formulas; so, the talk was more suitable for watching on demand than live. We recommend doing so.

[Image: Egor’s data validation system includes many checks to avoid messing with people’s money.]

Bottom Line

That was the first edition of Zyte’s 2025 Web Data Extract Summit. If any of the summaries tickled your fancy, the full recordings are available on YouTube. Thanks for reading!

New Review: Ping Proxies
News | Adam Dubois | Wed, 17 Sep 2025

Ping Proxies joins the ranks of our reviewed providers.
Ping Proxies is a former sneaker proxy provider that changed course after the niche crashed. These companies usually follow a simple playbook: touch up the marketing, maybe adjust the rates, and carry on. 
 
But Ping chose a different path: to build a proper platform that could compete with the best. Its residential proxies, management tools, and unique features like the Smartpath optimization engine impressed us enough to give the provider a solid score of 8.7. 
 
You’ll find the full review here.

New Review: Proxyrack
News | Adam Dubois | Thu, 11 Sep 2025

Proxyrack joins the ranks of our reviewed providers.

Proxyrack is one of the original sellers of unlimited residential proxies. They remain the provider’s forte, together with a whole host of features and a truly flexible platform.

However, while strong in the US and Europe, Proxyrack’s performance elsewhere wasn’t always up to par. As a result, we decided to give it a score of 8.2.

Read the full review here.

Oxylabs Invalidates Four of Bright Data’s Patents in Court
News | Adam Dubois | Thu, 14 Aug 2025

Bright Data’s grip on residential proxy technology is slipping.

On August 1, the U.S. Court of Appeals affirmed the invalidity of four of Bright Data’s patents: nos. 10,257,319; 10,484,510; 11,044,342; and 11,044,344.

The Israeli company used these patents as a basis for its residential proxy server technology – and as a bludgeon to challenge other providers in court.

The Court’s decision upholds an earlier ruling by the U.S. Patent Office. It came as part of a sprawling dispute between Oxylabs and Bright Data, and it invalidated Bright Data’s patents on the grounds of obviousness and prior art.

Throughout the years, Bright Data had sued multiple companies – such as NetNut, Oxylabs, and BiScience (GeoSurf) – over matters relating directly to or involving its patents. It had also intimidated competitors like SOAX out of renting proxy servers in Texas, the venue Bright Data uses for litigation.

Oxylabs had the following comment:

We welcome the Court’s recent decision to invalidate Bright Data patents, including the two patents previously used in litigation against Oxylabs over our residential proxy technology. In our view, this outcome strongly supports the position we’ve maintained for years.

Although the process was lengthy, it reaffirms our belief that the legal system ultimately ensures fair decisions – benefiting not only Oxylabs but the entire market.

We’re grateful to our legal team, advisors, and colleagues for their dedication. At Oxylabs, we remain committed to fair competition, technological innovation, and defending both.

Bright Data preferred not to comment. 

While significant, this development doesn’t conclude the legal battles in the proxy server market. However, it does allow providers to breathe a little easier. 

We believe that ideas should be rewarded – as the proxy market grows mature, patents may serve that purpose. However, patents must be carefully crafted to protect genuinely innovative ideas rather than be weaponized to stifle competition or exclude legitimate market participants.

Microsoft Retires Bing Search APIs Today
News | Adam Dubois | Mon, 11 Aug 2025

From now on, Bing data will be available through Microsoft’s AI agents.

Just three months after the first announcement, Microsoft is shutting down its Bing Search APIs.

Access will be disabled for all but the largest existing customers, such as DuckDuckGo. 

Microsoft guides other customers, together with new users, to consider its Grounding with Bing Search feature. It returns Bing data as part of chatbot responses, effectively serving as an engine for retrieval-augmented generation (RAG). 

Bing APIs were used by developers to programmatically access web, image, and other results from Bing Search. 

Developers who need raw data will have to resort to unofficial third-party search engine APIs.

Oxylabs Reveals 2025 OxyCon’s Agenda
News | Adam Dubois | Wed, 06 Aug 2025

The conference will feature five presentations and two panel discussions.

Oxylabs, the Lithuanian provider of web scraping infrastructure, has announced the agenda for OxyCon 2025, its annual conference on web data collection. 

The conference will take place online on October 1st. Participation is free of charge. 

This year, OxyCon’s line-up comprises five presentations and two panel discussions. 

  • The presentations will teach how to structure data at scale, grow e-commerce data extraction to billions of products per day, build a price comparison tool with Oxylabs’ new AI studio, improve web scraping with machine learning, and more. 
  • The panels will discuss the legal aspects surrounding web scraping and AI, as well as advanced web scraping techniques to stay unblocked.

Aside from Oxylabs’ in-house team, the list of participants includes companies like NielsenIQ, Google, and Idealista, as well as leading law firms in the field.

You can register for OxyCon on its designated page. If you want to learn more before committing, we covered the last three conferences in detail. 

OxyCon will be one of the two major web scraping-related conferences this year. The second, Zyte’s Extract Summit, will take place in late September (Austin) and early November (Dublin).

Kill Your Product – Why Sacrificing Your Cash Cow Can Be the Path to Growth
News | Shane Evans | Wed, 23 Jul 2025

An article by Shane Evans, CEO of Zyte.

In the tech industry these days, funerals for software are all too familiar and the graveyard of discontinued products is ever-growing. Whether it arises through company failure, M&A or market shifts, the decision to sunset software can evoke sadness, embarrassment, fear and resentment.

But killing your software can be a path to success. In fact, sunsetting your biggest product at its peak could be the move that unlocks a brighter future.

That’s what I did when I shocked my team by announcing we would deprecate the product accounting for 60% of our revenue. Here is what I learned, and why I think this bold move is sometimes necessary.

Piece-by-Piece Proliferation

All software is the story of laddering waves of new capability at the intersection of problem and opportunity.

My web scraping journey started in 2007 when I wrote Scrapy, a web scraping framework, to support extracting data from e-commerce websites. Within two years, my team used it to gather data from 4,000 websites.

However, additional challenges arose over time. When websites started blocking access, I wrote Smart Proxy Manager to route requests through a large list of IPs, manage them, and avoid getting blocked. Further capabilities were added as separate products, such as a residential proxy offering and the Smart Browser for large-scale browser rendering needs.

But the trouble with incrementalism is that one day, you wake up and realise your offering is really a smorgasbord of cumbersome point solutions.

Complexity Creeps Up

Servicing a suite of tools drains an increasing amount of time. As the task of modern web scraping grew in complexity, demanding several different approaches, we shipped products for each. But our stack became so complex that even our expert users lost time deciding on the optimal solution or responding to website changes.

Smart people, skilled at assembling pieces of a tech stack from disparate sources, won’t always complain to you about this sort of friction, because solving puzzles with competence is their job; technical challenges are business-as-usual. 

Moreover, this proliferation of products was considered good practice at the time – every other vendor was rapidly adding more products, often with overlapping use cases.

However, when providing customers with a collection of isolated tools, many remain oblivious to the full range of options available. They often don’t realise when they have made a sub-optimal choice, and can fail to recognise possibilities beyond their immediate needs.

Rip It Up

The answer to our problem lay in combining our offerings in a single API that could address the whole web scraping stack, making optimal use of the infrastructure and avoiding the need for users to manage all that complexity.

But sometimes people become accustomed to the status quo. Product managers assumed the new API would be an add-on to our primary product, Smart Proxy Manager, because they could only perceive iteration through our existing product offering.

So, when I said, “Guys, we’re killing these products,” people were shocked. I announced that, in a couple of years, we wouldn’t be selling the standalone products anymore – instead, we would build a single, brand-new product, an all-in-one web scraping API, called Zyte API.

I don’t mind admitting, the team thought I’d gone crazy; people were unhappy. A year after the switch, however, we have seen a 15% increase in revenue from migrated users. Even though the new product is cheaper on average for the same workload, usage is up considerably as it can be used on a broader range of tasks.

Sunsetting Is Success

So, don’t mourn for deprecated software. Sunsetting a product can indicate a mature software category experiencing strong growth, momentum that has driven a creative explosion of diverse solutions, which now need to be rationalised.

A company’s willingness to kill a product shows that it is evolving fast enough to outpace its previous offerings, transforming standalone features into a larger, more ambitious vision.

You can expect to see a lot more software being sacrificed in the near future. AI is such a step change that it will prompt a fundamental rethink of many products, including in the web scraping field, where large language models will transform the ability to parse unstructured data.

A Funeral for Your Flagship

If you are coming around to the value of bidding goodbye to your main product, what are the main considerations?

1. Build Internal Buy-In

Your team may provide the most resistance. After all, staff are wedded to and care deeply about what’s on their plate right now.

Your job is to build confidence in your vision for the future. Build a coalition of internal support by showing how the medium and long-term benefits represent a bigger prize. I had to communicate the vision clearly, demonstrating how an integrated API solution would ultimately save us time, reduce costs, and improve the customer experience.

Unfortunately, you won’t always convince everybody, and you must still proceed despite some opposition.

2. Reallocate Resources Meaningfully

Stopping doing something frees up the resources to do something else. This is the fuel that gives your future room to grow. Embracing that opportunity means deciding to stop actively developing the outgoing product.

Had we not actively stopped new feature development on Smart Proxy Manager, staff would not have taken Zyte API seriously. This goes for sales as well as product teams – we had to stop selling our older product.

3. Take Users on the Journey

Explain the rationale behind the change and highlight the new product’s value proposition. You need to get customers to see the benefits on the other side of the hill. Although in our case, the new product was cheaper on average, customers will often have concerns and questions about pricing.

4. Build a Bridge to the Future

But it’s not just about communication. Technical customers’ anxiety about product deprecation is real and understandable because no one wants to be forced to write new code for something that works perfectly well. Offering backwards compatibility, as Zyte API did, can massively minimise disruption to users. A commitment to continuing to support critical enterprise customers will always go a long way to guaranteeing continuity.

Kill or Be Killed

Letting go of the past is the best way to embrace the future. Retiring a flagship product isn’t a sign of failure; it’s a commitment to innovation.

As we enter this new era of disruption, I wonder if companies will be willing to disrupt themselves before it’s too late.

New Review: HypeProxies
News | Adam Dubois | Mon, 21 Jul 2025

HypeProxies joins the ranks of our reviewed providers.

HypeProxies is a specialized provider from the US. It sells proxies for limited releases, but they’re also suited to a variety of other tasks.

We had the chance to test the provider’s ISP proxy servers. While limited in features and available only in the US, they had immaculate performance – and pricing that we consider very fair. 

After weighing the pros and cons, we decided to give HypeProxies a score of 7.9. You can read the full review here: https://proxyway.com/reviews/hypeproxies-review

New Review: ProxyEmpire
News | Adam Dubois | Mon, 14 Jul 2025

ProxyEmpire joins the ranks of our reviewed providers.

ProxyEmpire is a proxy provider from Bulgaria. It offers all types of proxy servers, serving primarily entry-level to mid-sized customers. One of the key selling points is traffic that never expires.

We tested ProxyEmpire’s residential and mobile proxy pools. We found both above average in size, with support for precise location filters. On the other hand, they weren’t the fastest around and sometimes cost more than premium alternatives. The platform covered basic functionality well and included neat quality-of-life features.

All things considered, we decided to give ProxyEmpire eight points out of 10. You can read the full review here: https://proxyway.com/reviews/proxyempire-proxies

Bright Data Launches a Lineup of Tools for AI
News | Adam Dubois | Thu, 03 Jul 2025

Deep Lookup transforms search queries into datasets, while Browser.ai and the MCP server enable AI to roam the internet uninterrupted.

Bright Data, the Israeli provider of web scraping infrastructure and services, has introduced a lineup of AI-based and AI-oriented tools.

  1. Deep Lookup allows anyone to generate datasets without code, using the whole internet as a source. 
  2. Browser.ai aims to serve the booming industry of AI agents by giving them access to stealth web browsers.
  3. Bright Data’s MCP server allows LLMs and other apps to access web resources through a standardized protocol. 

Deep Lookup is currently in beta, accessible for companies to try out. The other two are available for all customers.

Deep Lookup

Introduced on July 2, Bright Data’s newest product aims to become an insight generation engine for non-technical teams. It accepts plain language queries and produces structured datasets, using the whole internet as the source.

[Image: Deep Lookup’s basic interface is simple. Source: brightdata.com]

Deep Lookup uses Bright Data’s archive of over 200 billion web pages, which is expected to reach 500 billion by next year, as well as live retrieval capabilities. 

In the end, users get a structured table of data points. They’re free to include more columns (either enriching the dataset or defining conditions to be met) at any point.

[Image: A lookup in progress. Source: Deep Lookup launch event]

Compared to similar deep research tools, Deep Lookup has the benefit of speed and structure. In addition, it transparently shows the sources and reasoning behind the results.

[Image: Here’s why this particular company got included. Source: Deep Lookup launch event]

Bright Data lists a wide range of use cases for its tool. For example, financial analysts can use it for market mapping, corporate strategists can screen targets for M&A, while B2B sales teams can look for leads.

Browser.ai

Browser.ai is Bright Data’s new brand that offers serverless web browsers. Its target audience is AI agents needing to access websites without blocks and other disruptions.

[Image: Browser.ai integration. Source: Browser.ai]

Browser.ai relies on the provider’s unblocking infrastructure: it integrates proxies, CAPTCHA solving capabilities, and cookie management, among other things. 

Furthermore, the tool accepts plain language prompts, which should prevent scripts from breaking upon website changes. 

Browser.ai’s plans start from $39 per month for 10 GB of data. There’s also a free plan with two gigabytes included.

MCP Server

Bright Data’s MCP server gives AI tools a standard interface for interacting with the provider’s infrastructure. For example, Claude can use it to run Google searches without encountering CAPTCHAs.

Bright Data’s list of MCP tools includes a general-purpose web scraper, browser interactions, and a collection of specialized scrapers covering search engines and other popular targets.

The server integrates with most AI tools like Claude, Cursor, and LangChain.

The Bottom Line

By now, Bright Data has fully embraced AI as a way forward. 

Deep Lookup is probably the most interesting of the three. It builds upon all of Bright Data’s infrastructure layers to challenge services like ChatGPT.  

While the protocol is still fresh out of the oven, the MCP server makes a lot of sense, especially knowing that our industry stands at the frontier of AI developments. 

Browser.ai actually repackages Bright Data’s existing Scraping Browser product, likely cutting some corners to reduce price. As recent investments into Browserbase ($27M) and Browser-Use ($17M) show, there’s much perceived demand for this product category. Admittedly, Bright Data also managed to snatch a perfect domain name.
