Technical Solution Architect - Databricks
Locations: Bengaluru, Karnataka; Indore, Madhya Pradesh; Pune, Maharashtra; Hyderabad, Telangana
Experience: 10 to 18 years
Job Reference Number: 12932
Qualifications
10–18 years of overall experience in Data Engineering.
At least 4 years of hands-on experience specifically with Databricks.
Willingness to travel onsite and work at client locations.
Proven experience as a Databricks Architect or in a similar role, with a deep understanding of Databricks capabilities.
Ability to analyze business requirements and translate them into technical specifications for data pipelines, data lakes, and analytical processes.
Expertise in designing end-to-end data solutions: ingestion, storage, transformation, and presentation layers.
Proficiency in setting up, configuring, and optimizing Databricks clusters, workspaces, and jobs.
Strong skills in managing access controls and security configurations for data privacy and compliance.
Experience designing and implementing ETL workflows and data pipelines from various sources into Databricks (a minimal PySpark sketch follows this list).
Ability to optimize ETL processes to improve data quality and reduce latency.
Ability to monitor and tune query performance and overall platform performance.
Ability to identify and resolve performance bottlenecks in Databricks.
Knowledge of best practices, standards, and guidelines to ensure data quality, consistency, and maintainability.
Experience implementing data governance and lineage processes.
Ability to mentor team members and conduct knowledge-sharing sessions and workshops.
Active participation in the Databricks practice's technical and partnership initiatives.
Ability to build technical capabilities that support the deployment and integration of Databricks-based solutions.
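
For context, the following is a minimal PySpark sketch of the kind of ETL pipeline described above: ingest a raw file, apply basic cleaning, and persist the result as a Delta table. All paths, table names, and column names are hypothetical placeholders, not part of any actual client codebase.

    # Minimal PySpark ETL sketch (hypothetical paths, tables, and columns).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # On Databricks a `spark` session already exists; this line keeps the
    # sketch self-contained elsewhere.
    spark = SparkSession.builder.getOrCreate()

    # Ingest: read raw CSV files from a (hypothetical) landing location.
    raw = (spark.read
           .option("header", "true")
           .csv("/Volumes/main/raw/orders/"))

    # Transform: parse timestamps, drop duplicates, filter out bad rows.
    cleaned = (raw
               .withColumn("order_ts", F.to_timestamp("order_ts"))
               .dropDuplicates(["order_id"])
               .filter(F.col("amount").isNotNull()))

    # Persist: write to a (hypothetical) Unity Catalog Delta table.
    (cleaned.write
     .format("delta")
     .mode("overwrite")
     .saveAsTable("main.analytics.orders"))

On Databricks the Delta format is available out of the box; outside Databricks the same code would need the delta-spark package configured.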
Skills Required
Databricks
Unity Catalog
PySpark
ETL
SQL
Delta Live Tables (see the sketch below)
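
To illustrate how several of these skills combine, here is a small Delta Live Tables sketch in Python: a bronze table loaded incrementally with Auto Loader and a silver table guarded by a data-quality expectation. It runs only inside a Databricks DLT pipeline, and the table names, landing path, and columns are hypothetical.

    # Illustrative Delta Live Tables pipeline (hypothetical names and path).
    import dlt
    from pyspark.sql import functions as F

    @dlt.table(comment="Raw orders loaded incrementally with Auto Loader")
    def orders_bronze():
        # `spark` is provided by the DLT runtime.
        return (spark.readStream
                .format("cloudFiles")
                .option("cloudFiles.format", "json")
                .load("/Volumes/main/landing/orders/"))

    @dlt.table(comment="Validated orders")
    @dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
    def orders_silver():
        # Rows failing the expectation above are dropped and recorded in the
        # pipeline's data-quality metrics.
        return (dlt.read_stream("orders_bronze")
                .withColumn("amount", F.col("amount").cast("double")))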
Role Requirements and Responsibilities
Bachelor's or Master's degree in Computer Science, IT, or a related field.
Strong hands-on expertise in Databricks, including Delta Lake, Delta Tables, cluster configuration, and policies.
Experience handling both structured and unstructured datasets.
Proficient in Python, Scala, and SQL programming.
Knowledge of cloud platforms such as AWS, Azure, or GCP, and of cloud-based storage and compute services.
Familiar with big data technologies such as Apache Spark, Hadoop, and data lake architectures.
Develop and maintain ETL workflows and data pipelines on Databricks.
Experience with both batch and streaming processing on Databricks (see the sketch at the end of this section).
Ability to create Databricks Workflows and schedule pipelines efficiently.
Knowledge of installing and managing packages and libraries in Databricks.
Familiarity with the default Databricks Runtime versions.
Databricks Certified Data Engineer Associate/Professional Certification is desirable.
Experience working in Agile environments.
Strong communication skills (both verbal and written).
Excellent analytical and problem-solving abilities with attention to detail.
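
As a final illustration of the batch-plus-streaming expectation above, the sketch below uses Structured Streaming with the availableNow trigger, which processes all currently available data incrementally and then stops, giving batch-style scheduling with streaming semantics. The source and target table names and the checkpoint path are hypothetical.

    # Incremental-batch Structured Streaming sketch (hypothetical names).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Read a (hypothetical) Delta table as a stream.
    events = spark.readStream.table("main.raw.events")

    # Write incrementally, tracking progress in a checkpoint, and stop once
    # all currently available data has been processed.
    (events.writeStream
     .option("checkpointLocation", "/Volumes/main/chk/events/")
     .trigger(availableNow=True)
     .toTable("main.analytics.events_clean"))

Run on a schedule (for example via a Databricks Workflows job), this pattern covers batch use cases, and switching the trigger turns the same pipeline into a continuously running stream.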