Description
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level.
As a Software Engineer III at JPMorgan Chase within the Corporate Technology Finance and Risk Warehouse SRE Team, you will solve complex and broad business problems with simple and straightforward solutions. Through code and cloud infrastructure, you will configure, maintain, monitor, and optimize applications and their associated infrastructure, independently decomposing and iteratively improving on existing solutions.
Job responsibilities
- Guides and assists others in building designs at the appropriate level and in gaining consensus from peers where appropriate
- Collaborates with other software engineers and teams to design and implement deployment approaches using automated continuous integration and continuous delivery (CI/CD) pipelines
- Collaborates with other software engineers and teams to design, develop, test, and implement availability, reliability, and scalability solutions in their applications
- Implements infrastructure, configuration, and network as code for the applications and platforms in their remit
- Collaborates with technical experts, key stakeholders, and team members to resolve complex problems
- Understands service level indicators and utilizes service level objectives to proactively resolve issues before they impact customers
- Supports the adoption of site reliability engineering best practices within the team
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 3+ years applied experience
- Strong analysis, research, investigation, and evaluation skills, with a structured approach to problem solving
- Specialized ETL knowledge in Spark
- Experience with monitoring and observability tools, including Dynatrace, OpenTelemetry (OTel), Prometheus, Datadog, and Grafana, particularly in dashboard development
- Proficiency in at least one programming language such as Python, Java (Spring Boot), Scala, or .NET
- Working knowledge of Kubernetes, Docker, or other container technologies
- Experience managing, developing, and deploying on cloud infrastructure (private or public cloud)
- Knowledge of Git, Bitbucket, Jenkins, Sonar, Splunk, Maven, AIM, and continuous delivery tools
- Experience with UNIX file management and administration, plus strong shell scripting skills
- Production-level working knowledge of Databricks and Apache Airflow on AWS
- Willingness to provide weekend support
Preferred qualifications, capabilities, and skills
- Experience developing, deploying, and running Ab Initio (an ETL tool) on a public cloud such as AWS
- AWS and/or Databricks certification
- Experience developing and running data pipelines using PySpark
- Support and development experience with Oracle (9i/10g/11g/19c) running on Exadata, ANSI SQL, and PL/SQL stored procedures
- Working knowledge of the Control-M or AutoSys scheduling packages
- Knowledge of or experience with Hadoop environment administration, including release deployments to Hive/HBase, supervising Hadoop jobs, and performing cluster coordination services