Big Data Developer - Job Code 005

Ideal candidates would have working knowledge of the Hadoop ecosystem, relational data stores, data integration techniques, XML, Python, Spark, SAS, R, emerging Big Data tools and technologies, Big Data visualization tools, and ETL techniques in the Hadoop and AWS ecosystems. Projects leverage Agile methodology to enable new business capabilities.

▪ Requirements/Critical Skills (4-6 years of working experience):
Able to work efficiently in a UNIX/Linux environment
Experience working with in-memory computing using R, Python, PySpark, and Scala
Experience in parsing and shredding XML and JSON, shell scripting, and SQL
Experience working with SQL databases (DB2, SQL Server, Sybase, Oracle) and NoSQL databases (MongoDB, DynamoDB)
Experience designing and developing data sourcing routines using typical data quality functions, including standardization, transformation, rationalization, linking, and matching
Knowledge of standards, processes, and technology related to data, master data, and metadata
Experience working with multi-Terabyte data sets (structured, semi/unstructured data)
Knowledge of job scheduling and monitoring tools such as Oozie and AutoSys
Experience with DevOps and CI/CD implementations using technologies such as Docker, Jenkins, or Test-Driven Development patterns
Preferred Skills:
Experience in the financial services and information management domains is a plus
Demonstrated flexibility and adaptability to change; Agile methodology experience with Jira or VersionOne
Demonstrated ability to manage multiple priorities and deadlines
Ability to work independently or as part of a team
Cloud infrastructure experience with one or more of the following Amazon Web Services (AWS) offerings is a plus: EC2, EMR, ECS, S3, SNS, SQS, CloudFormation, CloudWatch, Lambda

Send profiles to [email protected]

Job Category: Big Data Developer
Job Type: Full Time
Job Location: VA

Apply for this position

Allowed file type(s): .pdf, .doc, .docx