At Segment, we believe companies should be able to send their data wherever they want, whenever they want, with no fuss. We make this easy with a single platform that collects, stores, filters, transforms, and sends data to hundreds of business tools with the flip of a switch. More recently, we also developed the ability to let customers enrich their data in real time using computations they specify. Our goal is to make it easy to understand data, extract value from it, and protect its integrity. We are creating a world where engineers spend their time working on their core product, letting us take care of the complexity of processing their customer data reliably at scale. We’re in the running to take over the entire customer data ecosystem, and we need the best people to take the market.
We’re looking for software engineering generalists, with a predominantly backend focus or interest, who are excited to build large-scale distributed systems. Our small team provides the data infrastructure for thousands of companies and processes billions of API calls every single day. Customers entrust us with some of their most valuable data, depending on us to process and deliver it reliably and with low latency. So, getting this right counts for a lot.
What we do:
- We implement high-performance data pipelines using NSQ, Kafka, Go, Consul, Redis and DynamoDB.
- We build backend APIs using Go, Node.js, GraphQL and Aurora.
- We write UIs using ES6, Apollo and React.
- We deploy multiple times per day using Linux, Docker and Terraform.
- We inform our decisions using real data, both on the engineering and the product side.
Who we are looking for:
- You value and enjoy being part of a cohesive and supportive team.
- You are passionate about finding elegant solutions to challenging technical problems.
- You are able to drive an engineering solution to a business problem from end to end.
- You know when it is time to refactor, and when it’s time to ship.
Projects you could work on:
As a distributed systems engineer, you would focus mostly on implementing backend systems using technologies like Go, Kafka, Docker, Consul, Terraform and a variety of AWS and GCP technologies. Examples of some of the technical challenges you would solve include:
- Building highly available, real-time, large-scale event processing pipelines
- Designing systems which scale seamlessly according to demand
- Handling mutable shared state, with low latency update requirements, across a variety of SQL and NoSQL database platforms
- Managing operational cost while maintaining high development velocity
Occasionally, you might also have the opportunity to dig into some front-end or mobile code, should the need arise and the work appeal to you.
Requirements:
- A CS or EE degree, or relevant industry or open-source contributor experience.
- Solid computing fundamentals and a demonstrated ability to write code that solves real problems using a statically typed programming language.
- Strong theoretical fundamentals and hands-on experience designing and implementing highly available, performant, fault-tolerant distributed systems.
- Experience with implementing large-scale event processing pipelines, preferably using streaming technologies.
- Well-versed in concurrent programming.
- Familiarity with good practices for testing code and deploying it to a production environment.
Segment is an equal opportunity employer. We believe that everyone should receive equal consideration and treatment. Recruitment, hiring, placements, transfers, and promotions are based on qualifications for the positions being filled, regardless of sex, gender identity, race, religious creed, color, national origin, ancestry, age, physical disability, pregnancy, mental disability, or medical condition.