Responsibilities:
-Support senior team members in designing and implementing data pipelines, fetching data from various sources such as APIs, CSV files, JSON, XML, SFTP, message queues (AMQP/RabbitMQ), and web scraping.
-Assist in manipulating and transforming raw data into the desired format, ensuring data quality and consistency.
-Learn and contribute to implementing data storage solutions using databases like MySQL, MS SQL, PostgreSQL, BigQuery, Firestore, etc.
-Assist in designing and implementing basic APIs to serve data from databases or facilitate data insertion.
-Work closely with senior team members on database tasks such as querying, optimization, and maintenance.
-Utilize technologies such as Pandas, NumPy, FastAPI, SQLModel, SQLAlchemy, and Requests for data handling and processing.
-Learn and gain hands-on experience with cloud platforms such as Google Cloud Platform (GCP) and Azure.
Requirements:
-Bachelor's degree in Computer Science, Data Science, or a related field.
-Knowledge of Python programming with an interest in data-related libraries and frameworks.
-Basic SQL skills for database management.
-Strong problem-solving and analytical skills.
-Passion for continuous learning and staying updated on industry trends.
-Excellent communication skills and the ability to learn from and collaborate with senior team members.
Nice to Have:
-Familiarity with additional data-related libraries and tools.
-Interest in ML/AI models.
-Ability to convey technical concepts to non-technical stakeholders.
-Experience with web scraping.
Role: Database Administrator
Industry Type: IT Services & Consulting
Department: Engineering - Software & QA
Employment Type: Full Time, Permanent
Role Category: DBA / Data warehousing
Education
UG: B.Sc in Any Specialization
PG: Any Postgraduate
Doctorate: Doctorate Not Required