Work Location
Austin, Texas
Company
Confidential
Type
Contract
Category
IT
Job Duration
6 Months
Skills
Amazon Web Services (AWS), Application Development, Java, JavaScript, JSON (JavaScript Object Notation), NoSQL Database, Software Development Lifecycle


Job Description

Overview: 

TekWissen Group is a workforce management provider operating throughout the USA and in many other countries worldwide. Our client is an online travel company, enabling users to access a wide range of services. The company offers airline ticket bookings, hotel reservations, car rentals, cruises, vacation packages, and various attractions and services via the World Wide Web and telephone travel agents. It owns and operates several international online travel brands, primarily travel fare aggregator websites and travel metasearch engines.

 

Title: Software Dev Engineer II 

Duration: 6 months 

Work Location: Austin, Texas 

 

Job Description: 

 

 

  • Are you a Software Engineer who wants to work for a strong and creative online travel technology company? 
  • Do you have a passion for creating platforms that turn data into meaningful information with the help of machine learning platforms? 
  • Do you want to work with the latest ML technologies in the Search and Discovery ecosystem? If any of those are true…Client is looking for YOU! 
  • The Search and Discovery Insights team is looking for a thoughtful and dedicated Software Engineer to build out the learning systems for the Search Platform. 
  • We work closely with Search and Discovery Data Scientists in a fast-paced Agile environment to create and productionize algorithms and ML models that provide various insights to the Search Platform. 
     
  • This role provides a unique opportunity to create innovations and ground-breaking change using cloud, big data and machine learning technologies.  
  • You will be able to incubate and experiment with newer technologies and platforms to demonstrate the value of machine learning and enable data consumption at scale with agility. 
  • Bring passion and dedication to your job and there's no telling what you could accomplish. 
     
     

What you’ll do: 
 

  • Challenges you will work on: 
  • Design and develop business-critical backend systems using stream processors and high-quality data pipelines. 
  • Work in a cross-functional team of Machine Learning engineers and Data Scientists to design and code large-scale batch and real-time data pipelines on AWS. 
  • Assemble large, complex data sets that meet functional / non-functional business requirements. 
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies. 
  • Build a cloud-native, real-time stream processing and data lake platform that scales into the zettabytes and beyond. 
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics. 
  • Perform code reviews, and lead by example on code refactoring for readability, extensibility, and testability. 
  • Lead your products, with a focus on DevOps and robust automation. 
  • Perform root cause analysis on external and internal processes and data to identify opportunities for improvement and answer questions. 
  • Build processes that support data transformation, workload management, data structures, dependency management, and metadata. 
  • Develop AutoML infrastructure for model selection and hyperparameter tuning. 
  • Embrace change; always be open to replacing what you built yesterday with something better today. 
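The stream-processing work described above centers on consuming events, transforming them, and emitting results downstream. As a purely illustrative sketch (the real systems use Kafka, Spark, and AWS services; the event shape and function names here are hypothetical), the core consume-transform-produce loop might look like:

```python
# Toy consume-transform-produce loop; stands in for a Kafka/Spark pipeline.
# The event shape and function names are hypothetical illustrations.

def consume(events):
    """Stand-in for a stream consumer: yields raw events one at a time."""
    for event in events:
        yield event

def transform(event):
    """Enrich a raw search event with a derived field (toy business logic)."""
    return {**event, "query_length": len(event["query"])}

def produce(sink, event):
    """Stand-in for a stream producer: appends to an in-memory sink."""
    sink.append(event)

def run_pipeline(raw_events):
    """Wire the three stages together and return the materialized output."""
    sink = []
    for event in consume(raw_events):
        produce(sink, transform(event))
    return sink

results = run_pipeline([{"query": "hotels austin"}, {"query": "flights"}])
```

In a production pipeline each stage would be backed by a durable broker and checkpointed state rather than in-memory lists, but the shape of the code is the same.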
     
     

Who you are: 
 

Desired development skills: 
 

  • We are looking for a candidate with 5+ years of experience in a Software Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools: 
  • Experience with big data tools: Hadoop, Spark, Kafka, etc. 
  • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra. 
  • Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. 
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift 
  • Experience with stream-processing systems: Storm, Spark-Streaming, etc. 
  • Experience with object-oriented/functional scripting languages: Python, Java, Scala, etc. 
  • Experience with microservice architecture, and design. 
  • Experience with machine learning toolkits such as Spark MLlib, H2O, scikit-learn, and R, and with ML techniques. 
  • Strong command of machine learning libraries such as PyTorch and TensorFlow, and knowledge of common integration patterns for serving inference with them. 
  • Proven track record of building and optimizing data sets, 'big data' data pipelines, and architectures. 
  • Excellent problem-solving and analytical skills for working with unstructured datasets. 
  • Experienced in delivering real-time data feedback loops and streams to support highly available and scalable solutions with large transaction volumes on a 24x7 operational cycle. 
  • Experience communicating with users, other technical teams, and senior management to collect requirements, describe software product features, and review technical designs. 
  • As a highly competent engineer, you will develop scalable, highly available, highly reliable, performant, and resilient Tier-1 microservices with high throughput in the tens of thousands of requests per second at low latency. 
  • Many of these systems are machine-learning driven, and developing solutions with real-time data, cached data, and data at rest will allow us to meet the demanding SLAs required and work closely with data scientists to optimize our solutions. 
  • You will be developing core capabilities that enable features and APIs that will drive our business through real-time feedback loops.  
  • All of these solutions will be designed for and built in the cloud. Other responsibilities will include ownership of continuous integration and deployment (CI/CD) while utilizing Agile methodologies. 
  • As part of our continuous learning culture, you will have the opportunity to participate in exciting global company hackathons and various career building courses with co-workers of all levels of expertise. 
  • Scale our services to more than 60k requests per second 
  • Develop fast, scalable, highly available, and reliable machine learning solutions with real-time feedback loops 
  • Develop new capabilities to our learning systems that will enable new features and improve experiences 
  • Develop a smart service layer that will allow for in-session personalization based on user behavior and data 
  • Work with brand stakeholders on a scalable integration pattern to access these core systems 
  • Technologies that will be used: Java, Kotlin, Spring Boot, Kafka, Kubernetes, gRPC, Elasticsearch, Cassandra, Redis, S3, AWS, Docker, Splunk, Datadog, and the latest open-source technologies 
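The AutoML responsibility listed earlier (model selection and hyperparameter tuning) reduces, at its core, to searching a hyperparameter space for the configuration that maximizes a validation score. A minimal, library-free random-search sketch follows; the objective function and parameter ranges are invented for illustration, and real pipelines would use Spark MLlib, scikit-learn, or similar toolkits:

```python
import random

def random_search(objective, space, n_trials=50, seed=0):
    """Sample hyperparameters uniformly from `space` and keep the best trial.

    space: dict mapping parameter name -> (low, high) range to sample from.
    objective: callable scoring a params dict (higher is better).
    """
    rng = random.Random(seed)  # seeded for reproducible trials
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective with a known optimum at learning_rate=0.1, regularization=1.0;
# a real objective would be a cross-validated model score.
def toy_objective(p):
    return -((p["learning_rate"] - 0.1) ** 2) - ((p["regularization"] - 1.0) ** 2)

best, score = random_search(
    toy_objective,
    {"learning_rate": (0.0, 1.0), "regularization": (0.0, 2.0)},
    n_trials=200,
)
```

Production AutoML infrastructure would distribute these trials, prune bad ones early, and persist results, but the select-sample-score-keep-best loop is the same.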
     

Preferred Experience: 
 

a) Experience designing real-time low latency solutions with Elasticsearch or equivalent search technologies 

 
b) Experience operationalizing machine learning models with high throughput and low latency. 
 
c) Experience with distributed computing, microservices, and streaming technologies. 

 

 
TekWissen® Group is an equal opportunity/affirmative action Employer (m/f/d/v) supporting workforce diversity.