June 10, 2025
Job Description
Job Type: Full Time
Education: B.Sc/M.Sc/B.E/M.E/B.Com/M.Com/BBA/MBA/B.Tech/M.Tech/All Graduates
Skills: Python, .NET, React Native, Django, JavaScript, HTML, CSS, TypeScript, Communication Skills, Power BI, NumPy, Pandas, SQL, Machine Learning, Data Analysis, Data Science, Java, Adobe XD, Figma, PHP, WordPress, Artificial Intelligence, Excel

MLSE (Python/PySpark)

Locations: Noida, Uttar Pradesh, India; Gurgaon, Haryana, India; Hyderabad, Telangana, India; Bangalore, Karnataka, India; Indore, Madhya Pradesh, India
Experience: 6 to 8 years
Job Reference Number: 13024


Qualification

  1. 6–8 years of hands-on experience with Big Data technologies – PySpark (DataFrame and SparkSQL), Hadoop, and Hive

  2. Strong experience with Python and Bash scripting

  3. Solid understanding of SQL and data warehouse concepts

  4. Excellent analytical, problem-solving, and research skills

  5. Ability to think innovatively and solve problems beyond standard toolsets

  6. Strong communication, presentation, and interpersonal skills

  7. Hands-on experience with AWS Big Data services – IAM, Glue, EMR, Redshift, S3, Kinesis

  8. Experience with orchestration tools such as Apache Airflow and other job schedulers

  9. Experience in migrating workloads from on-premises to cloud or cloud-to-cloud environments
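The SQL and data-warehouse fundamentals listed above (point 3) can be illustrated with a minimal, self-contained sketch. This is not part of the role description: it uses Python's built-in sqlite3 as a stand-in for a warehouse engine such as Redshift, and the `sales` table and its columns are hypothetical.

```python
import sqlite3

# Hypothetical sketch: a tiny fact table queried with a GROUP BY aggregate,
# the kind of SQL used day-to-day in warehouse work. sqlite3 stands in for
# an actual warehouse engine such as Amazon Redshift.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 100.0), ("north", 50.0), ("south", 75.0)],
)

# Aggregate revenue per region, highest total first
rows = conn.execute(
    "SELECT region, SUM(amount) AS total "
    "FROM sales GROUP BY region ORDER BY total DESC"
).fetchall()
print(rows)  # [('north', 150.0), ('south', 75.0)]
conn.close()
```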


Skills Required

Python, PySpark, SQL


Role & Responsibilities

  1. Develop efficient ETL pipelines based on business requirements, adhering to development standards and best practices

  2. Conduct integration testing of pipelines in AWS environments

  3. Provide time and effort estimates for development, testing, and deployment activities

  4. Participate in peer code reviews to ensure code quality and standards compliance

  5. Build cost-effective AWS pipelines using services like S3, IAM, Glue, EMR, and Redshift
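The pipeline duties above follow the common extract-transform-load shape. As a minimal, hypothetical sketch in plain Python: a CSV string and an in-memory list stand in for the real stages, whereas a production pipeline for this role would read from S3 with PySpark and load into Redshift via Glue or EMR. The field names (`user_id`, `amount`) are illustrative only.

```python
import csv
import io

# Hypothetical ETL sketch. Stand-ins: RAW for an S3 object,
# `warehouse` (a list) for a Redshift table.
RAW = "user_id,amount\n1,10.5\n2,-3.0\n3,7.25\n"

def extract(text):
    """Extract: parse raw CSV records into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(records):
    """Transform: cast types and drop invalid (negative) amounts."""
    cleaned = []
    for rec in records:
        amount = float(rec["amount"])
        if amount >= 0:
            cleaned.append({"user_id": int(rec["user_id"]), "amount": amount})
    return cleaned

def load(records, target):
    """Load: append cleaned rows to the target store; return row count."""
    target.extend(records)
    return len(records)

warehouse = []
loaded = load(transform(extract(RAW)), warehouse)
print(loaded, warehouse)  # 2 rows survive the negative-amount filter
```

Keeping each stage a pure function, as here, is what makes the peer review and integration testing called for above practical: each step can be asserted on in isolation.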
