If you are interested, please share your updated resume with me, and forward this email to any colleagues who are looking for a change.
Title: Hadoop Developer
Location: NY
Duration: Full Time
Job Description
The ideal candidate will join an expanding, yet carefully curated, team of seasoned data operators engaged in large-scale analytics, data science, personalization, and optimization. As a member of the Data Team, you will be at the epicenter of our most powerful data assets and innovations. Projects will range from developing large data pipelines and informing computation and processing paradigms to not-so-standard deep-dive investigations and all-purpose health-wise data wrangling.
Responsibilities:
Work with internal stakeholders and outside vendors to manage and oversee our Hadoop cluster and cloud computing infrastructure
Build and maintain highly scalable data pipelines using the Hadoop ecosystem and open source components
Use cloud computing best practices to enhance our traditional business intelligence capabilities and enable new capabilities for next-generation recommendation systems, predictive analytics platforms and real-time delivery solutions
Design, automate, and execute performance analysis/benchmarking, QA, and capacity planning
Integrate HDFS data with other RDBMS/application assets; implement reliable and durable connections from Hadoop to other virtual/physical environments and platforms
Support Business Intelligence and Data Science teams with data preparation, data profiling, analytics, and insight
Document processes and applications; author best-practice standards as they pertain to existing and future Hadoop environments
Continuously evaluate and vet updates to distributed Hadoop and related Apache open source components
Requirements:
Minimum 2 years of relevant industry experience
Hands-on experience developing Hadoop applications
Demonstrated proficiency in coding MapReduce jobs using Pig, Hive, or Java (see the Java sketch after this list)
A strong understanding of the Hadoop framework, core components and distributed computing concepts
Solid experience with data modeling (physical and logical) and authoring workflow processes from soup to nuts
More than ample knowledge of Unix/Linux (Red Hat) OS; user and system security as it pertains to running Hadoop and configuring requisite components in a cloud setting
Confident working with large amounts of heterogeneous data sources; flexible across formats and data sourcing methods; nimble with data manipulation and data shaping
Working knowledge of Oozie workflow/coordinator/bundle creation and management
Working knowledge of ZooKeeper and High Availability concepts
Experience with Hadoop-RDBMS integration; processing data with Sqoop or similar
Well versed in at least one RDBMS (SQL Server, Oracle, MySQL); strong SQL coding ability
Strong conceptual, analytical, problem solving, decision making and planning skills
Confident self-starter
Strong written and oral communication skills
Minimum BS in Computer Science, Engineering and/or Mathematics
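For reference on the MapReduce requirement above, here is a minimal sketch of a Java MapReduce job of the kind the role describes, essentially the canonical word count from the Apache Hadoop tutorial; the class names are illustrative and the input/output HDFS paths are hypothetical command-line arguments.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in the input split
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sums the counts emitted for each word
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // combiner cuts shuffle volume
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input dir
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output dir
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Packaged into a jar, a job like this would typically be launched with hadoop jar wordcount.jar WordCount <input dir> <output dir>. Setting the combiner to the reducer class is a common optimization when the reduce function is associative and commutative, as a sum is here.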
Desirables:
Proficiency in at least one: Java, Python or Ruby
Experience with R or SAS modeling environments; preparing data for algorithmic R&D
Exposure to Hadoop software distributions such as Cloudera, Hortonworks, or MapR
Solid understanding of Amazon elastic cloud computing paradigms, limitations, and security
Experience with any of the following: Mahout, HBase, Spark, Solr, Impala, NoSQL databases (a minimal Spark sketch follows below)
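Since Spark appears among the desirables, here is a comparably minimal sketch of the same word-count aggregation against Spark's Java API, assuming Spark 2.x; the local[*] master and the argument paths are placeholders for illustration, not cluster-ready settings.

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class SparkWordCount {
  public static void main(String[] args) {
    // Local master for illustration only; a real deployment would set
    // the master via the launcher rather than hard-coding it
    SparkConf conf = new SparkConf().setAppName("word count").setMaster("local[*]");
    try (JavaSparkContext sc = new JavaSparkContext(conf)) {
      JavaRDD<String> lines = sc.textFile(args[0]); // HDFS or local input path
      JavaPairRDD<String, Integer> counts = lines
          .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
          .mapToPair(word -> new Tuple2<>(word, 1))
          .reduceByKey(Integer::sum);
      counts.saveAsTextFile(args[1]); // output directory
    }
  }
}
```

The same logic runs on a YARN-backed Hadoop cluster by submitting through spark-submit with an appropriate master instead of local[*].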
Dear Sir, we can get it done perfectly for you, exactly the way you want it. Kindly give us a chance and we will prove ourselves; we are ready to stand behind our words.
Let's get it done right away, and I mean RIGHT AWAY!! Looking forward to hearing from you soon. God bless you.
Hi there,
I have been working with Node.js and Redis for the last 3 years, and during my professional career I have completed many projects in Node and Redis, including two large applications. If you are interested, please let me know.
Thanks.
We are a team of 3 Java developers, with me as the team leader, and each of us has around 2 years of industry experience. I have prior industry experience at Teradata, where I worked on a cricket-analysis search engine; that project gave me good exposure to Hadoop development, with key responsibilities including writing Hive scripts, Pig scripts, and Java MapReduce jobs. It also involved analyzing the transformed data with statistical formulas in R to extract meaningful results.
I am a certified Hortonworks Hadoop Developer 1.X and a Teradata 14 certified professional. We are starting out as freelancers, and this is the project that best matches our skills. Our reputation on freelancer.pk is our top priority, so we will do our best to deliver quality work on time.
The bid is the rate per hour.
Is this a job offer? It sounds like one.
I am not clear on this. I am looking for an onsite job opportunity.
Let me know your email address, and I will send my resume.