Data Engineer

We hire people with all kinds of awesome experiences, backgrounds, and perspectives. We like it that way. So even if you don’t meet every single requirement, please consider applying if you like what you see.

Key Responsibilities

  • Acts as a technical leader by making significant technical contributions to the planning and implementation of mid- to large-scale projects from conception to completion.
  • Demonstrates versatility of knowledge and applies strong technical skill sets to architect and implement cloud-based data solutions.
  • Validates requirements, performs business and technical analysis, designs cloud-native applications, writes optimized PySpark-based data pipelines, and contributes to end-to-end application development.
  • Ensures compliance with coding standards and best practices for AI-assisted code generation, suggesting opportunities for improvement.
  • Utilizes GenAI tools such as Cursor and LLMs like Claude to decompose complex requirements and generate UI, API, and database scripts for rapid development.
  • Acquires and utilizes strong technical and application knowledge to introduce and forecast the impact of new software design patterns, AI-driven development workflows, and emerging cloud technologies.
  • Shares information and expertise to improve team productivity and empower engineers with AI-driven automation.
  • Acts as a subject matter expert (SME) within the organization, helping resolve complex technical issues related to cloud data engineering, distributed computing, and microservices architecture.
  • Mentors team members and fosters collaborative learning environments, particularly in areas related to GenAI, Azure, PySpark, Databricks, and CI/CD automation.
  • Leads multiple projects within the team, ensuring seamless integration of AI-driven development into software delivery pipelines.

Preferred Strengths

  • Proficiency in GenAI-powered development, leveraging AI tools for code completion, optimization, and automated unit testing.
  • Strong experience with SQL, relational databases, and Git, including creating build and release definitions within a CI/CD environment.
  • Expertise in Microsoft Azure cloud services, including Azure Data Factory, Azure Functions, and Azure Databricks.
  • Familiarity with serverless architectures, containerization (Docker/Kubernetes), and big data frameworks like Apache Spark.

Requirements, Skills, and Knowledge

  • 7+ years of software engineering experience building enterprise applications using Python and .NET.
  • Proven ability to analyze, decompose, and automate requirement-based development using GenAI.
  • Strong experience in building and deploying cloud-based data processing applications using PySpark and Azure Databricks.
  • Hands-on expertise in automating and optimizing cloud infrastructure using Terraform, Bicep, or ARM templates.
  • Experience implementing business-critical database applications, REST microservices, and real-time event-driven architectures using Kafka, Orkes, and Databricks.
  • Deep knowledge of SDLC methodologies (Agile, DevOps) and experience leading technical implementations from architecture to production deployment.
  • Proficiency in coding standards, code reviews, source control management, build processes, testing, and operations.
  • Experience integrating AI-based automation for improving code quality, predictive analytics, and data-driven decision-making.
  • Wealth Management domain experience is a plus but not mandatory; candidates should be willing to learn the domain on the job.