Designs, develops, modifies, adapts and implements solutions to information technology needs through new and existing applications, systems architecture and applications infrastructure.
Reviews system requirements and business processes; codes, tests, debugs, and implements software solutions.
Responsibilities
Optimize data lake performance by identifying and resolving production and application development problems.
Act as a Data Steward, working with developers to ensure Data Governance best practices.
Help investigate and resolve data anomalies including data quality issues and ambiguous data definitions.
Recommend data integrity checks and controls to ensure data quality.
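Integrity checks like those described above can be sketched in a few lines. This is a minimal, hypothetical example in Python (one of the languages named later in this posting); the field names (`id`, `symbol`, `price`) and the specific checks are illustrative assumptions, not part of any actual Nasdaq system.

```python
# Hypothetical row-level data integrity checks over a batch of records
# represented as dicts; field names and rules are illustrative only.
def check_records(records):
    """Return a list of (index, issue) tuples for records failing basic checks."""
    issues = []
    seen_ids = set()
    for i, rec in enumerate(records):
        # Completeness: required fields must be present and non-empty.
        if not rec.get("symbol"):
            issues.append((i, "missing symbol"))
        # Validity: prices must be positive numbers.
        price = rec.get("price")
        if not isinstance(price, (int, float)) or price <= 0:
            issues.append((i, "invalid price"))
        # Uniqueness: a duplicate id suggests an upstream replay or join fan-out.
        rid = rec.get("id")
        if rid in seen_ids:
            issues.append((i, "duplicate id"))
        seen_ids.add(rid)
    return issues
```

Checks of this shape are typically run at ingestion time so that anomalies are quarantined before they reach downstream consumers.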
Define data/information architecture standards, structure, attributes, and nomenclature of data elements, and apply accepted data content standards to technology projects.
Develop data retention practices and system lifecycle for Nasdaq's index applications based on Nasdaq's data retention policies.
Hands-on ability to set up reporting tools and build reports and ad hoc functionality, giving users access to their data.
Experience with master data management, data governance, data security, data quality and related tools desired.
Create and maintain optimal data pipeline architecture.
Assemble large, complex data sets that meet functional and non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
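The extract-transform-load pattern behind this responsibility can be sketched briefly. The example below uses Python with SQLite standing in for a real warehouse; the table name, columns, and transformation rules are illustrative assumptions, not the actual pipeline.

```python
import sqlite3

# Minimal ETL sketch: extract raw (symbol, price) rows, transform them
# (normalize symbols, drop non-positive prices), and load into a table.
# SQLite substitutes here for a real target store; schema is hypothetical.
def run_etl(conn, raw_rows):
    """Load cleaned rows into a quotes table; return the total row count."""
    conn.execute("CREATE TABLE IF NOT EXISTS quotes (symbol TEXT, price REAL)")
    # Transform step: uppercase symbols, filter out invalid prices.
    clean = [(symbol.upper(), price) for symbol, price in raw_rows if price > 0]
    # Load step: bulk insert the cleaned batch.
    conn.executemany("INSERT INTO quotes VALUES (?, ?)", clean)
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM quotes").fetchone()[0]
```

In a production AWS setting the same shape would typically be expressed with Glue or Spark jobs writing to the lake, with the transform logic kept separate from extraction and loading so each stage can be tested independently.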
Work with QA Test analysts to ensure test coverage (including integration and regression testing).
Develop new program logic and/or assemble standard logic modules to create new applications.
Good understanding of AWS and open-source big data technologies: Airflow, Terraform, AWS Glue, AWS Lambda, Spark SQL.
Ability to implement both batch and streaming data pipelines in AWS; change data capture (CDC) experience.
Experience with Databricks preferable.
Programming Spark in Scala; proficiency in SQL, including writing complex SQL queries.
Strong data analysis and troubleshooting skills.
Domain knowledge of Capital Markets is a plus.
Knowledge of shell scripts and other languages including Python, R, and Java is a plus.
At least 2 years of hands-on experience on data engineering programs on AWS.
Minimum 1 year of experience with Spark and Scala.
Experience with relational databases (preferably Oracle).
Good knowledge of Linux OS and shell scripting.
Experience working on complex distributed information systems.
Experience with version control systems, preferably SVN and Git.
Strong work ethic in a mission-critical 24x7 diverse environment with multiple vendor/customer relationships.
Come as You Are
Nasdaq is an equal opportunity employer.
We positively encourage applications from suitably qualified and eligible candidates regardless of age, color, disability, national origin, ancestry, race, religion, gender, sexual orientation, gender identity andor expression, veteran status, genetic information, or any other status protected by applicable law.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment.
Please contact us to request an accommodation.