Online or onsite, instructor-led live Apache Hadoop training courses demonstrate through interactive hands-on practice the core components of the Hadoop ecosystem and how these technologies can be used to solve large-scale problems.
Hadoop training is available as "online live training" or "onsite live training". Online live training (aka "remote live training") is carried out by way of an interactive, remote desktop. Onsite live Hadoop trainings in Leuven can be carried out locally on customer premises or in NobleProg corporate training centers.
NobleProg -- Your Local Training Provider
Leuven
Park Inn by Radisson Leuven, Martelarenlaan 36, Leuven, Belgium, 3010
Louvain (Dutch: Leuven; German: Löwen) is a Dutch-speaking city in Belgium, located in the Flemish Region. It is the capital of the province of Flemish Brabant and of the district that bears its name, and lies on the Dyle, a tributary of the Rupel. It is a university city, home to the Katholieke Universiteit Leuven, the Dutch-speaking institution that emerged from the split of Belgium's oldest university. Leuven also hosts the headquarters of AB InBev, the largest brewer in the world, and is known as the beer capital of Belgium.
This instructor-led, live training in Leuven (online or onsite) is aimed at developers who wish to use and integrate Spark, Hadoop, and Python to process, analyze, and transform large and complex data sets.
By the end of this training, participants will be able to:
Set up the necessary environment to start processing big data with Spark, Hadoop, and Python.
Understand the features, core components, and architecture of Spark and Hadoop.
Learn how to integrate Spark, Hadoop, and Python for big data processing.
Explore the tools in the Spark ecosystem (Spark MLlib, Spark Streaming, Kafka, Sqoop, and Flume).
Build collaborative filtering recommendation systems similar to Netflix, YouTube, Amazon, Spotify, and Google.
Use Apache Mahout to scale machine learning algorithms.
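The collaborative filtering mentioned above rests on a simple idea: recommend to a user what their most similar users liked. A minimal, dependency-free sketch of that similarity step (the toy ratings data and names here are purely illustrative; production systems would use Spark MLlib at scale):

```python
from math import sqrt

# Toy user -> {item: rating} data, invented for illustration only.
ratings = {
    "alice": {"film_a": 5, "film_b": 3, "film_c": 4},
    "bob":   {"film_a": 4, "film_b": 2, "film_c": 5},
    "carol": {"film_a": 1, "film_b": 5, "film_c": 2},
}

def cosine(u, v):
    """Cosine similarity between two users' rating vectors (shared items only)."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = sqrt(sum(u[i] ** 2 for i in shared))
    norm_v = sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (norm_u * norm_v)

def most_similar(user):
    """Return the other user whose ratings are most similar."""
    others = (name for name in ratings if name != user)
    return max(others, key=lambda other: cosine(ratings[user], ratings[other]))

print(most_similar("alice"))  # bob
```

From here, a recommender would suggest items the most similar user rated highly that the target user has not yet seen.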
Datameer is a business intelligence and analytics platform built on Hadoop. It allows end-users to access, explore and correlate large-scale, structured, semi-structured and unstructured data in an easy-to-use fashion.
In this instructor-led, live training, participants will learn how to use Datameer to overcome Hadoop's steep learning curve as they step through the setup and analysis of a series of big data sources.
By the end of this training, participants will be able to:
Create, curate, and interactively explore an enterprise data lake
Access business intelligence data warehouses, transactional databases and other analytic stores
Use a spreadsheet user-interface to design end-to-end data processing pipelines
Access pre-built functions to explore complex data relationships
Use drag-and-drop wizards to visualize data and create dashboards
Use tables, charts, graphs, and maps to analyze query results
Audience
Data analysts
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
Audience:
The course is intended for IT specialists looking for a solution to store and process large data sets in a distributed system environment
Goal:
Deep knowledge of Hadoop cluster administration.
Big data analytics involves the process of examining large amounts of varied data sets in order to uncover correlations, hidden patterns, and other useful insights.
The health industry has massive amounts of complex heterogeneous medical and clinical data. Applying big data analytics on health data presents huge potential in deriving insights for improving delivery of healthcare. However, the enormity of these datasets poses great challenges in analyses and practical applications to a clinical environment.
In this instructor-led, live training (remote), participants will learn how to perform big data analytics in health as they step through a series of hands-on live-lab exercises.
By the end of this training, participants will be able to:
Install and configure big data analytics tools such as Hadoop MapReduce and Spark
Understand the characteristics of medical data
Apply big data techniques to deal with medical data
Study big data systems and algorithms in the context of health applications
Audience
Developers
Data Scientists
Format of the Course
Part lecture, part discussion, exercises and heavy hands-on practice.
Note
To request a customized training for this course, please contact us to arrange.
The course is intended for IT specialists looking for a solution to store and process large data sets in a distributed system environment
Course goal:
Gaining knowledge of Hadoop cluster administration
Apache Hadoop is the most popular framework for processing Big Data on clusters of servers. In this three-day (optionally four-day) course, attendees will learn about the business benefits and use cases for Hadoop and its ecosystem, how to plan cluster deployment and growth, and how to install, maintain, monitor, troubleshoot, and optimize Hadoop. They will also practice bulk data loading into the cluster, get familiar with various Hadoop distributions, and practice installing and managing Hadoop ecosystem tools. The course finishes with a discussion of securing the cluster with Kerberos.
“…The materials were very well prepared and covered thoroughly. The Lab was very helpful and well organized”
— Andrew Nguyen, Principal Integration DW Engineer, Microsoft Online Advertising
Audience
Hadoop administrators
Format
Lectures and hands-on labs, approximate balance 60% lectures, 40% labs.
Apache Hadoop is the most popular framework for processing Big Data on clusters of servers. This course will introduce developers to the various components of the Hadoop ecosystem (HDFS, MapReduce, Pig, Hive, and HBase).
Apache Hadoop is one of the most popular frameworks for processing Big Data on clusters of servers. This course delves into data management in HDFS, advanced Pig, Hive, and HBase. These advanced programming techniques will be beneficial to experienced Hadoop developers.
Audience: developers
Duration: three days
Format: lectures (50%) and hands-on labs (50%).
In this instructor-led training in Leuven, participants will learn the core components of the Hadoop ecosystem and how these technologies can be used to solve large-scale problems. By learning these foundations, participants will improve their ability to communicate with the developers and implementers of these systems as well as the data scientists and analysts that many IT projects involve.
Audience
Project Managers wishing to implement Hadoop into their existing development or IT infrastructure
Project Managers needing to communicate with cross-functional teams that include big data engineers, data scientists and business analysts
Hadoop is a popular Big Data processing framework. Python is a high-level programming language famous for its clear syntax and code readability.
In this instructor-led, live training, participants will learn how to work with Hadoop, MapReduce, Pig, and Spark using Python as they step through multiple examples and use cases.
By the end of this training, participants will be able to:
Understand the basic concepts behind Hadoop, MapReduce, Pig, and Spark
Use Python with Hadoop Distributed File System (HDFS), MapReduce, Pig, and Spark
Use Snakebite to programmatically access HDFS within Python
Use mrjob to write MapReduce jobs in Python
Write Spark programs with Python
Extend the functionality of Pig using Python UDFs
Manage MapReduce jobs and Pig scripts using Luigi
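Tools like mrjob and Hadoop Streaming, covered above, all follow the same map/shuffle/reduce pattern. A dependency-free sketch of that pattern in plain Python (the `run` helper stands in for the framework's shuffle and sort; it is not part of any Hadoop API):

```python
from itertools import groupby
from operator import itemgetter

def mapper(line):
    """Map phase: emit (word, 1) for every word, as a streaming mapper would."""
    for word in line.lower().split():
        yield word, 1

def reducer(word, counts):
    """Reduce phase: sum all counts seen for one key."""
    return word, sum(counts)

def run(lines):
    # Shuffle/sort phase: group mapper output by key, as the framework does.
    pairs = sorted(kv for line in lines for kv in mapper(line))
    return [reducer(word, (c for _, c in group))
            for word, group in groupby(pairs, key=itemgetter(0))]

print(run(["big data big insight", "big cluster"]))
# [('big', 3), ('cluster', 1), ('data', 1), ('insight', 1)]
```

With mrjob, the mapper and reducer above become methods on an `MRJob` subclass and the framework handles the shuffle across the cluster.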
Audience
Developers
IT Professionals
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
This instructor-led, live training in Leuven (online or onsite) is aimed at system administrators who wish to learn how to set up, deploy and manage Hadoop clusters within their organization.
By the end of this training, participants will be able to:
Install and configure Apache Hadoop.
Understand the four major components in the Hadoop ecosystem: HDFS, MapReduce, YARN, and Hadoop Common.
Use Hadoop Distributed File System (HDFS) to scale a cluster to hundreds or thousands of nodes.
Set up HDFS to operate as the storage engine for on-premise Spark deployments.
Set up Spark to access alternative storage solutions such as Amazon S3 and NoSQL database systems such as Redis, Elasticsearch, Couchbase, Aerospike, etc.
Carry out administrative tasks such as provisioning, management, monitoring and securing an Apache Hadoop cluster.
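Much of the configuration work listed above happens in Hadoop's XML configuration files. As an illustration only, a fragment of `hdfs-site.xml` showing two commonly tuned properties (the directory path is a placeholder, not a recommendation):

```xml
<!-- hdfs-site.xml: illustrative values only; the path below is a placeholder -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <!-- How many copies of each block HDFS keeps across the cluster -->
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <!-- Where the NameNode persists the filesystem metadata -->
    <value>file:///data/hadoop/namenode</value>
  </property>
</configuration>
```

Changes to these files generally require restarting the affected daemons to take effect.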
This course introduces HBase – a NoSQL store on top of Hadoop. The course is intended for developers who will be using HBase to develop applications, and administrators who will manage HBase clusters.
We will walk developers through HBase architecture, data modelling, and application development on HBase. The course also covers using MapReduce with HBase and administration topics related to performance optimization. The course is very hands-on, with many lab exercises.
Duration : 3 days
Audience : Developers & Administrators
In this instructor-led, live training in Leuven (onsite or remote), participants will learn how to deploy and manage Apache NiFi in a live lab environment.
By the end of this training, participants will be able to:
Install and configure Apache NiFi.
Source, transform and manage data from disparate, distributed data sources, including databases and big data lakes.
In this instructor-led, live training in Leuven, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi.
By the end of this training, participants will be able to:
Understand NiFi's architecture and dataflow concepts.
Develop extensions using NiFi and third-party APIs.
Develop their own custom Apache NiFi processors.
Ingest and process real-time data from disparate and uncommon file formats and data sources.
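NiFi extensions are normally written in Java, but the flow-based programming idea behind them can be sketched in a few lines of Python: independent processors, each doing one job, chained into a flow. The processor names and data below are invented for illustration:

```python
def source():
    """A source processor: emits flowfiles (here, plain strings)."""
    yield from ["record,1", "record,2", "bad line"]

def validate(flowfiles):
    """A processor with two relationships: valid records are routed onward."""
    for f in flowfiles:
        if "," in f:
            yield f  # the "success" relationship; failures would route elsewhere

def enrich(flowfiles):
    """A transform processor: appends a field to each flowfile."""
    for f in flowfiles:
        yield f + ",enriched"

# Wiring processors into a flow, as NiFi connects components on its canvas.
pipeline = enrich(validate(source()))
print(list(pipeline))  # ['record,1,enriched', 'record,2,enriched']
```

Each stage knows nothing about the others, which is what makes flows easy to rearrange and extend.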
Apache Samza is an open-source, near-real-time, asynchronous computational framework for stream processing. It uses Apache Kafka for messaging and Apache Hadoop YARN for fault tolerance, processor isolation, security, and resource management.
This instructor-led, live training introduces the principles behind messaging systems and distributed stream processing, while walking participants through the creation of a sample Samza-based project and job execution.
By the end of this training, participants will be able to:
Use Samza to simplify the code needed to produce and consume messages.
Decouple the handling of messages from an application.
Use Samza to implement near-realtime asynchronous computation.
Use stream processing to provide a higher level of abstraction over messaging systems.
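Samza jobs are written in Java or Scala, but the decoupling described above can be sketched in plain Python: producers append to a stream without knowing who consumes it, and a task processes one message at a time, much like a Samza StreamTask's per-message callback. All names here are illustrative, not Samza API:

```python
from collections import deque

# "stream" stands in for a Kafka topic partition.
stream = deque()

def produce(message):
    """Producers append to the stream without knowing who will consume it."""
    stream.append(message)

class UppercaseTask:
    """Processes one message at a time, echoing a StreamTask's process() style."""
    def __init__(self):
        self.output = []

    def process(self, message):
        self.output.append(message.upper())

def run(task):
    """The framework's role: drain the stream and hand each message to the task."""
    while stream:
        task.process(stream.popleft())

produce("page_view")
produce("click")
task = UppercaseTask()
run(task)
print(task.output)  # ['PAGE_VIEW', 'CLICK']
```

Because the producer and the task share only the stream, either side can be replaced or scaled without touching the other.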
Audience
Developers
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
Cloudera Impala is an open source massively parallel processing (MPP) SQL query engine for Apache Hadoop clusters.
Impala enables users to issue low-latency SQL queries against data stored in the Hadoop Distributed File System and Apache HBase without requiring data movement or transformation.
Audience
This course is aimed at analysts and data scientists performing analysis on data stored in Hadoop via Business Intelligence or SQL tools.
After this course, delegates will be able to:
Extract meaningful information from Hadoop clusters with Impala.
Write queries in the Impala SQL dialect to support Business Intelligence workloads.
Apache Ambari is an open-source management platform for provisioning, managing, monitoring and securing Apache Hadoop clusters.
In this instructor-led, live training, participants will learn the management tools and practices provided by Ambari to successfully manage Hadoop clusters.
By the end of this training, participants will be able to:
Set up a live Big Data cluster using Ambari
Apply Ambari's advanced features and functionalities to various use cases
Seamlessly add and remove nodes as needed
Improve a Hadoop cluster's performance through tuning and tweaking
Audience
DevOps
System Administrators
DBAs
Hadoop testing professionals
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
This instructor-led, live training in Leuven (online or onsite) introduces Hortonworks Data Platform (HDP) and walks participants through the deployment of a Spark + Hadoop solution.
By the end of this training, participants will be able to:
Use Hortonworks to reliably run Hadoop at a large scale.
Unify Hadoop's security, governance, and operations capabilities with Spark's agile analytic workflows.
Use Hortonworks to investigate, validate, certify and support each of the components in a Spark project.
Process different types of data, including structured, unstructured, in-motion, and at-rest.
Testimonials (8)
I liked the virtual machine environments because the trainer could easily toggle between the views and help if we were struggling with the material.
Pedro
Course - Apache NiFi for Developers
The fact that we were able to take with us most of the information/course/presentation/exercises done, so that we can look over them and perhaps redo what we didn't understand the first time or improve what we already did.
Raul Mihail Rat - Accenture Industrial SS
Course - Python, Spark, and Hadoop for Big Data
Trainer's preparation & organization, and quality of materials provided on github.
Mateusz Rek - MicroStrategy Poland Sp. z o.o.
Course - Impala for Business Intelligence
The practical, hands-on work; the theory was also presented well by Ajay.
Dominik Mazur - Capgemini Polska Sp. z o.o.
Course - Hadoop Administration on MapR
I liked the VM very much.
The teacher was very knowledgeable regarding the topic as well as other topics, and he was very nice and friendly.
I liked the facility in Dubai.
Safar Alqahtani - Elm Information Security
Course - Big Data Analytics in Health
I thought he did a great job of tailoring the experience to the audience. This class is mostly designed to cover data analysis with HIVE, but me and my co-worker are doing HIVE administration with no real data analytics responsibilities.
ian reif - Franchise Tax Board
Course - Data Analysis with Hive/HiveQL
I genuinely enjoyed the many hands-on sessions.
Jacek Pieczątka
Course - Administrator Training for Apache Hadoop
The fact that all the data and software was ready to use on an already prepared VM, provided by the trainer in external disks.