About Us

We’re a team of 500+ professionals who develop cutting-edge proxy and web data scraping solutions for thousands of the world’s best-known businesses, including Fortune 500 companies.

About the Team

We run one of the largest and most advanced scraping and parsing products in the world, serving thousands of requests per second with a very high success rate. Our scrapers and parsers are used by leading e-commerce, market intelligence, and AI industry players, which makes the work challenging and truly global. The team is a blend of interesting personalities from different walks of life and nationalities: here you’ll find experts in everything from gaming and guitar to cycling. As a team, we will support you in learning how to build your own scrapers and share all the tips, tricks, and hacks we know, so you’ll be up to speed in no time.

Your Day-to-Day

  • Develop scalable scrapers.
  • Define resilient scraping strategies and unblock websites for scraping.
  • Improve observability in the system.
  • Develop back-end solutions for scraping & parsing problems of various magnitudes.
  • Maintain the current system and develop new features related to scraping & parsing.

What’s in store for you:

You’ll be solving complex challenges and maintaining our own infrastructure, which carries 60PB+ of data traffic every month. Here is its scale in numbers:

  • 6PB+ Ceph storage
  • 60PB+ monthly data traffic through our systems
  • 300k+ service requests/sec processed
  • 500k+ Kafka messages/sec streamed

Salary

  • Gross salary: 4500 - 7000 EUR/month. Keep in mind that we are open to discussing a different salary based on your skills and experience.

To support your professional growth and make you feel taken care of, we’ve put together an expansive benefits package. It covers learning, well-being, celebration, and much more — learn all about it here.

Your skills & experience:

  • Experience working with Python.
  • Understanding of computer science fundamentals, including data structures, algorithms, computability, and complexity.
  • Version control skills using Git.
  • Knowledge of how to unblock websites for scraping.
  • Ability to use different scraping techniques & open-source tools to build scrapers.
  • Comfort using browser Dev Tools.
  • Networking (TLS/SSL) knowledge.
  • Experience with browser automation.
  • Familiarity with asynchronous programming (see the brief sketch after this list).
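
To give a flavour of the day-to-day work, here is a minimal, illustrative sketch of an asynchronous scraper that fetches pages concurrently and parses them with CSS selectors. It is not our production code: the aiohttp/parsel stack and the placeholder URL are assumptions chosen purely for demonstration.

```python
# Illustrative only — real scrapers add retries, proxies, and unblocking logic.
import asyncio

import aiohttp
from parsel import Selector

URLS = ["https://example.com/"]  # placeholder targets


async def fetch(session: aiohttp.ClientSession, url: str) -> str:
    # Fetch a single page with a conservative timeout.
    async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
        resp.raise_for_status()
        return await resp.text()


async def scrape(urls: list[str]) -> list[str]:
    # Fetch all pages concurrently, then parse each with a CSS selector
    # (XPath works the same way via Selector.xpath).
    async with aiohttp.ClientSession() as session:
        pages = await asyncio.gather(*(fetch(session, u) for u in urls))
    return [Selector(text=page).css("title::text").get() for page in pages]


if __name__ == "__main__":
    print(asyncio.run(scrape(URLS)))
```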

Nice to have:

  • Web development knowledge.
  • Knowledge of CSS selectors / XPath for parsing.
  • Experience working with Go & C++.
  • Experience working on browser source code.
  • Knowledge of any front-end framework.
  • Experience working with Pydantic, FastAPI, and SQLAlchemy.
  • Experience working with Redis, MySQL, Docker, Kubernetes, Elasticsearch, Kibana, and monitoring tools such as Grafana and Prometheus.
  • Experience with machine learning applied to the scraping domain.
  • Experience building scalable systems.