Be a NutaMIND


Senior Data Engineer

Job Description

The role entails building data processing frameworks that handle the business's growing data volumes and maintaining optimized, highly available data pipelines that support deeper analysis and reporting by the Data and Analytics department.
NutaNXT Technologies offers a compelling and rewarding work environment: market-competitive salaries, bonuses, equity, benefits, meaningful growth and development opportunities, and a casual yet technically challenging culture. Join our dynamic, entrepreneurial team and become part of our continuing success.

Responsibilities


  • Implement real-time data ingestion and processing solutions.
  • Design, build, and operationalize large-scale enterprise data solutions and applications using Big Data and AWS data and analytics services in combination with third-party tools: Python, PySpark, EMR, Amazon Redshift, Kinesis, Lambda, Glue, Amazon S3, AWS IAM, Amazon CloudWatch, Hadoop/EMR, Hive, Sqoop, etc.
  • Build and implement ETL pipelines and EDW or data lake solutions.
  • Analyze, re-architect, and re-platform on-premises data warehouses to data platforms on the AWS cloud using AWS or third-party services.
  • Translate complex business problems into scalable technical solutions.
  • Design and build production data pipelines from ingestion to consumption within a big data architecture, using Python, PySpark or Scala.
  • Collaborate with a high-performing data engineering team and own the end-to-end solution implementation.
  • Design and implement data engineering, ingestion and curation functions on AWS cloud using AWS native or custom programming.
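To illustrate the "ingestion to consumption" pipeline shape these responsibilities describe, here is a minimal, self-contained Python sketch of the classic extract/transform/load stages. In production these stages would typically run on PySpark/EMR against sources like S3 or Kinesis; every name here (`extract`, `transform`, `load`, `RAW_EVENTS`) is an illustrative assumption, not NutaNXT's actual stack.

```python
# Minimal batch ETL sketch: ingest raw records, curate them, and load
# the result into a "consumption" store (here, an in-memory dict).
# All names and sample data are hypothetical, for illustration only.

RAW_EVENTS = [
    {"user_id": "u1", "amount": "19.99", "currency": "usd"},
    {"user_id": "u2", "amount": "bad-data", "currency": "USD"},
    {"user_id": "u1", "amount": "5.00", "currency": "usd"},
]

def extract(records):
    """Ingestion step: yield raw records from the source."""
    yield from records

def transform(records):
    """Curation step: validate and normalize rows, dropping malformed ones."""
    for rec in records:
        try:
            amount = float(rec["amount"])
        except ValueError:
            continue  # skip bad rows rather than failing the whole batch
        yield {
            "user_id": rec["user_id"],
            "amount": amount,
            "currency": rec["currency"].upper(),
        }

def load(records):
    """Consumption step: aggregate spend per user into the target store."""
    store = {}
    for rec in records:
        store[rec["user_id"]] = store.get(rec["user_id"], 0.0) + rec["amount"]
    return store

# Stages compose lazily, mirroring a streaming ingestion-to-consumption flow.
totals = load(transform(extract(RAW_EVENTS)))
print(totals)
```

The same three-stage shape scales from this toy loop to a PySpark job, where `extract` becomes a DataFrame read, `transform` a chain of column expressions, and `load` a write to a warehouse or lake.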


Qualifications

  • Experience as a Data Engineer, with 3 to 5 years of development experience in Big Data and the AWS cloud.
  • 5-10 years of total work experience.
  • Bachelor's degree in Computer Science, Information Technology, or another relevant field.
  • Experience with ETL, data integration, and working with large-scale datasets.
  • Experience with file formats such as Parquet and Avro.
  • SQL and data warehousing skills are preferred.
  • Data engineering concepts (ETL, near-/real-time streaming, data structures, metadata and workflow management).
  • Experience with code management tools (Git/GitHub).
  • Experience with SQL or NoSQL databases such as Cassandra, MongoDB, and HBase.
  • Experience with batch and real-time processing implementations using Sqoop, Kafka, Hadoop, Spark, Hive, etc.