Required: a single dedicated part-time developer with the skills below. At least 3 years of experience working in the Hadoop ecosystem and big data technologies. Builds data pipelines and ETL from heterogeneous sources into Hadoop using Kafka, Flume, Sqoop, Spark Streaming, etc. Experience in batch (Spark/Scala) or real-time data streaming (Kafka). Ability to dynamically...
... who has a minimum of 2+ years' working knowledge of the skills below. Good knowledge of Big Data querying tools such as Pig, Hive, and Impala. • Experience with Spark, Hadoop, MapReduce, HDFS. • Knowledge of various ETL techniques and frameworks...
...program and deliver a collaborative client-server tool that will provide functionality across multiple database platforms such as Oracle, SQL Server, Postgres, MySQL, as well as Hadoop Hive, Cassandra, etc. Delivery time: 30 days. Build a data mapping program using the Erwin API. Don't bid if you don't have API experience. ======================================
I need Spark 2.3 streaming to write to HBase on EMR for IoT sensor data.
Skills: • 4-8 years of experience on Big Data platforms such as Hadoop, MapReduce, Spark, HBase, CouchDB, Hive, Pig, etc. • Experienced with data modeling, design patterns, and building highly scalable and secure analytical solutions • Used SQL, PL/SQL and similar languages, plus UNIX shell scripting • Worked with Teradata, Oracle, MySQL, Informatica, Tableau
Hi Everyone, This is Ritika, I need experienced interview supporters for Hadoop technology. [Removed by Freelancer.com Admin - please see Section 13 of our Terms and Conditions] Feel free to reach me anytime. Thank you.
Movie recommendations generated with Spark ML libraries (Scala/Python), with the final output stored in an HBase table.
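The posting above asks for Spark ML with output to HBase; as a minimal, library-free sketch of the underlying recommendation idea, here is a pure-Python item-based collaborative filter (cosine similarity over a `{user: {movie: rating}}` dict). The function names are illustrative, not part of any Spark API; a real delivery would use Spark MLlib (e.g. ALS) and an HBase connector instead.

```python
from math import sqrt
from collections import defaultdict

def item_similarities(ratings):
    """Cosine similarity between items, from {user: {movie: rating}}."""
    by_item = defaultdict(dict)                 # invert to item -> {user: rating}
    for user, movies in ratings.items():
        for movie, r in movies.items():
            by_item[movie][user] = r
    sims, items = {}, list(by_item)
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            common = set(by_item[a]) & set(by_item[b])
            if not common:
                continue
            dot = sum(by_item[a][u] * by_item[b][u] for u in common)
            na = sqrt(sum(v * v for v in by_item[a].values()))
            nb = sqrt(sum(v * v for v in by_item[b].values()))
            sims[(a, b)] = sims[(b, a)] = dot / (na * nb)
    return sims

def recommend(ratings, user, sims, n=3):
    """Score unseen movies by similarity-weighted ratings of seen ones."""
    seen = ratings[user]
    scores = defaultdict(float)
    for movie, r in seen.items():
        for (a, b), s in sims.items():
            if a == movie and b not in seen:
                scores[b] += s * r
    return sorted(scores, key=scores.get, reverse=True)[:n]
```

The Spark version distributes exactly these two steps (pairwise similarity, then weighted scoring) across the cluster, writing the final top-N lists to HBase.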
...HDFS, and also can't consume the data from HDFS/S3 using Spark, due to an error about the missing Hadoop client. The HDFS cluster and Spark are set up and show healthy states. The client's [log masuk untuk melihat URL] and [log masuk untuk melihat URL] are also downloaded and placed under the local Hadoop installation, but: 1) 'hdfs dfs -ls /' leads to an error; 2) Spark comp...
I can provide support for Hadoop/Big Data administration and help you with ongoing projects. - Well experienced in Big Data (Hadoop | Kafka | Spark | NoSQL | Elasticsearch | Cloud) administration and in scaling platforms to accommodate expanding business needs. - Well experienced with different Big Data and cloud vendors (Hortonworks, Cloudera, MapR, etc.)
... These acquired features, stored in a feature vector, will be further processed. iv. We will probably get efficient results through the feature vector stored in HDFS (Hadoop Distributed File System). v. During brain tumor classification, we will apply a classifier to the feature vectors, which serve as its input. vi. Specified classification
This is a small part of a big project. You need to have complete knowledge of Hadoop: how NameNode and DataNode read/write functionality works, creating a heterogeneous cluster in the cloud (OpenStack is desirable), and pushing the change to Hadoop. Desirable language: Java. Help is provided when asked. We have only 2 weeks for this. Write operation
Predict the loan_status (0 or 1) for the approved-loans data, based on the data given. The project should be done using Hadoop MapReduce and logistic regression. If you cannot do it using logistic regression, then choose any other supervised machine learning technique. Train and test data are given: use the train data to train and the test data to test. Use Python or
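A minimal single-machine sketch of the logistic-regression part of a task like the one above (stochastic gradient descent on the log loss, binary labels 0/1). In the Hadoop MapReduce version, the per-record gradient computation is what mappers would distribute, with a reducer summing gradients. All names here are illustrative.

```python
import math

def train_logreg(X, y, lr=0.1, epochs=500):
    """SGD for logistic regression; X is a list of feature lists, y is 0/1 labels."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))      # sigmoid
            err = p - yi                        # gradient of the log loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Return predicted class: 1 if P(loan repaid/defaulted per label coding) >= 0.5."""
    z = b + sum(wj * xj for wj, xj in zip(w, xi))
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0
```

On the real loan dataset you would first encode categorical fields numerically and normalize features; the training loop itself is unchanged.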
...help in building a cybersecurity analytics platform using the Apache stack, such as NiFi (log collection), Kafka (communication), Storm (real-time stream computation), Strata Hadoop (S3), Metron (analytics framework), Elasticsearch, and for monitoring Zabbix, Grafana for visualisation, and ElastAlert for platform alerting. I'm open to hearing other ideas
Build a website which stores data in the cloud. You also need knowledge of Hadoop (HDFS) and a proper understanding of NameNode and DataNode functionality. Message me for a detailed project description.
Job Description: we are looking for a data governance developer. Should have good knowledge of schema evolution, data lineage, Hadoop, Java, and Spark, with experience in Spark, Avro, ORC, and Protocol Buffers. Should know how to design schema evolution and implement Apache Sentry security. Should be strong in metadata management, data lineage, and data provenance.
We are a company called Beedata, specialists in big data processing, working with Hadoop for the energy sector. We are looking for a consultant specialized in Spark: we are designing the architecture of a data lake for a client and need support on the Spark side of the design. We are therefore looking for an architect
...schema evolution concepts, data provenance, etc. Job Description: we are looking for a data governance developer. Should have good knowledge of schema evolution, data lineage, Hadoop, Java, and Spark, with experience in Spark, Avro, ORC, and Protocol Buffers. Should know how to design schema evolution and implement Apache Sentry security. Should be strong in metadata management
...file format. I would like to find the maximum speed in a file and the mean speed for each borough. The project is not very complex, but it must be done using Hadoop (big data), and the MapReduce code should be provided. I attached code for the weather dataset; mine should be similar, but instead of Station ID mine would use borough, and temperature
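The max/mean-per-borough job above follows the classic MapReduce aggregation pattern. Here is a pure-Python simulation of the mapper, shuffle, and reducer phases, assuming simple "borough,speed" CSV lines; the function names are illustrative, and a real delivery would implement the same two functions against the Hadoop Streaming or Java MapReduce API.

```python
from collections import defaultdict

def mapper(line):
    """Emit a (borough, speed) pair from one 'borough,speed' CSV line."""
    borough, speed = line.strip().split(",")
    yield borough, float(speed)

def reducer(borough, speeds):
    """Combine all speeds for one borough into (max, mean)."""
    return borough, (max(speeds), sum(speeds) / len(speeds))

def run_job(lines):
    """Simulate the shuffle: group mapper output by key, then reduce each key."""
    grouped = defaultdict(list)
    for line in lines:
        for key, value in mapper(line):
            grouped[key].append(value)
    return dict(reducer(k, v) for k, v in grouped.items())
```

In real Hadoop, the framework performs the grouping in `run_job` for you between the map and reduce phases; only `mapper` and `reducer` need to be written.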
...results (the final centers and the assignments for each center). After you've finished the first question, visualize your result using a 2-D plot. Testing is to be done using Hadoop. NOTE: the deadline for this project is Tuesday 11/25/18, 10:00 am EST...
Hello Trainees, I am a Big Data Hadoop trainer. Some batches are currently running at Concept Solutions Pvt. Ltd. in Indore. I am providing job-oriented Big Data Hadoop training with one trending Big Data Hadoop project. So whoever is interested in becoming a DATA SCIENTIST, just let me know. Thanks, Moin Khan, Big Data Hadoop Trainer
I have k-means MapReduce code which runs for 2 clusters, but I want to change it to work for different k values. I need help getting it working.
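A single-machine sketch of Lloyd's k-means showing where the cluster count `k` is parameterized rather than hard-coded. In the MapReduce version described above, the assignment loop is the map phase and the center recomputation is the reduce phase; generalizing from 2 clusters to `k` only changes the number of centers carried between iterations. Assumes 2-D points as tuples; names are illustrative.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's algorithm for any k; points are (x, y) tuples."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)             # k initial centers, not just 2
    clusters = []
    for _ in range(iters):
        # Assignment step (the 'map' phase): nearest center by squared distance.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Update step (the 'reduce' phase): mean of each cluster.
        new = [tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl else centers[i]
               for i, cl in enumerate(clusters)]
        if new == centers:                      # converged
            break
        centers = new
    return centers, clusters
```

Empty clusters keep their previous center here; a production version might re-seed them instead.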
I am looking for someone who is an expert in Hadoop MapReduce, machine learning, and Python/Java to help me with loan default prediction using MapReduce on a Hadoop cluster, with any machine learning model, in Java or Python. Loan default prediction means predicting whether the customer is going to pay the loan back or not, based on the data
Please read the document attached. It contains all the details of this project. The project is not very complex, but it should be done using Hadoop, and adequate documentation MUST be provided.
We need a single dedicated part-time resource on Hadoop, Python (expert), PySpark, AWS, and NiFi (optional) to support a US client on weekday mornings for around 90 minutes, 6:00 am to 8:00 am IST. We will pay 22,000 per month. Only candidates with a minimum of 4+ years of experience are eligible to bid.
My work is related to medical images like MR... detect and extract brain tumors. Due to the large size of those images, storage and processing become cumbersome. So my proposed work is to store those images in Hadoop HDFS and apply an SVM algorithm to classify whether a tumor is benign or malignant. I want a developer to code this for me.