San Francisco, CA, United States

Senior Big Data Engineer

Who We Are:

  • TokBox, a leader in WebRTC technology, provides the leading cloud platform for adding live video, voice, and messaging to your web and mobile applications. We believe that integrating real-time communication into products should be simple, whether you’re developing an app for one-to-one calls, group chat, or large-scale broadcast.

Who We Are Seeking:

  • The Data Infrastructure team is seeking an engineer passionate about the flow of massive amounts of data. We currently use real-time data for platform monitoring, debugging clients and servers, and providing visualizations that help our customers improve their applications. As we continue to expand our use of real-time data, we will need an engineer with strong experience implementing scalable data infrastructure and a commitment to operating both existing and new data solutions.

Your Role at TokBox:

  • Architect and implement big data systems for TokBox, informed by a deep understanding of our core platform.
  • Research and implement new technologies and frameworks to continuously evolve the data infrastructure.
  • Build tools to allow internal and external teams to visualize and extract insights from big data platforms.
  • Drive key parts of our data mining and analysis algorithms, and use the resulting insights to build out the next-generation intelligent platform.
  • Maintain the current pipeline and work closely with our DevOps team.
  • Debug issues as they arise.
  • Become our data evangelist and lead conversations around how we can derive more value out of our data and enable operational excellence.
  • Work alongside multiple teams, including Engineering, Business Analytics, and Marketing.

Requirements Needed So You Can Be Successful:

  • In-depth knowledge of and experience with Kafka and the ELK stack.
  • Experience working on data structures and algorithms in production-grade systems.
  • Professional Java development background, especially in big data.
  • Experience coding with either Storm or Spark.
  • Experience with AWS, especially EC2, S3 and EMR.
  • Working knowledge of the Hadoop ecosystem (HDFS, MapReduce, Hive, Presto, etc.).
  • Experience developing high-performance distributed systems.
  • Excellent verbal communication and technical leadership skills, and the ability to coordinate activities across multiple teams.
  • BS/MS in CS or equivalent experience.

Nice to Have, but Certainly Not Necessary:

  • Experience with:
    • Hadoop
    • SQL, Hive, Flume, etc.
    • Scala or Python.
    • HTTP, JSON, XML, and any scripting language.
  • Interest in Machine Learning.

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
