With our 16-hour course you can learn the fundamentals of data science with Python, whether you work on your laptop or on a big data cluster, using numpy, pandas and PySpark. Communicate your analyses with informative graphics from matplotlib and seaborn. Configure, train and assess machine learning models with scikit-learn.
Development of contemporary machine learning and cluster computing frameworks is geared towards Python. Even when Python is not a framework's primary API, there is nearly always a Python binding. A notable example is Spark, whose primary API is Scala but which is most often used through its Python binding.
Python is also the language most data scientists prefer for desktop data analysis. There is a rich set of ready-made ML algorithms, along with libraries that pull data from and push it to large storage backends in different formats, making the whole process of Exploratory Data Analysis (EDA) effective and easy.
This course is a 3-day hands-on lab on Python's numpy, pandas, PySpark, matplotlib, seaborn and scikit-learn packages, the de facto standard of the data scientist's toolset. Along the way we'll test our knowledge with exercises using real-life datasets from Kaggle and elsewhere.
An important takeaway for all participants is the set of Python notebooks used in the course, which will serve as a valuable reference for their future tasks.
Upon course completion, the participants will know:
- the essential statements, constructs and idioms of Python and how to develop and share their code using Jupyter notebooks.
- the basics of numpy and pandas libraries for querying in-memory tabular data.
- how to visualize the outcomes of data analyses using matplotlib and seaborn.
- how to process data on large clusters using PySpark.
- how to set up and assess machine learning models with scikit-learn.
Who should attend
- Software engineers who want to make a transition to data science practice.
- Data scientists who want to learn about the Python data-analysis and machine learning toolset.
- Business Analysts who want to make an evolutionary leap to big data analytics.
- Technical managers who evaluate technologies and teams, or who shape big data strategies within related enterprise policies.
Prerequisites
- Knowledge of installing and configuring computer software.
- Understanding of computer programming concepts.
- General knowledge of data formats and data transformations (filtering and reduction).
- Knowledge of basic descriptive statistics is helpful but not mandatory.
- A laptop with Ubuntu 16.04 or Windows 10, at least 4 GB RAM and 32 GB of disk storage.
Course outline
- Development front-ends: Jupyter console, Jupyter notebook and qtconsole
Using the command history
Interacting with the OS
The interactive debugger
- Python bootcamp
Literals, expressions and statements
Python containers, comprehensions and generator expressions
Function objects, lambdas and closures
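The bootcamp constructs above can be sketched in a few lines. This is a minimal illustration, not taken from the course notebooks; all names are made up:

```python
# List comprehension: squares of the even numbers below 10
squares = [n * n for n in range(10) if n % 2 == 0]

# Generator expression: the same values summed lazily, no list built
total = sum(n * n for n in range(10) if n % 2 == 0)

# Closure: make_adder returns a lambda that remembers `step`
def make_adder(step):
    return lambda x: x + step

add_five = make_adder(5)

print(squares)       # [0, 4, 16, 36, 64]
print(total)         # 120
print(add_five(10))  # 15
```

The generator expression shows why comprehensions matter for data work: it streams values to `sum` instead of materializing an intermediate list.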
- Fast array calculations with the numpy package
The ndarray object
Integer and Boolean slicing
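A short sketch of the numpy topics listed above, using a toy array rather than course data:

```python
import numpy as np

# An ndarray: 12 consecutive integers reshaped into 3 rows x 4 columns
a = np.arange(12).reshape(3, 4)

# Integer (fancy) indexing: pick rows 0 and 2
rows = a[[0, 2]]

# Boolean slicing: keep only the elements greater than 5
big = a[a > 5]

print(a.shape)  # (3, 4)
print(rows)
print(big)      # [ 6  7  8  9 10 11]
```

Boolean slicing is the vectorized replacement for an explicit filtering loop, and it reappears in pandas and PySpark under the same mask idiom.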
- Tabular data management with the pandas package
Indexing, selection and filtering
Function application and mapping
Data filtering and reductions
Handling missing data
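The pandas topics above, in one small sketch; the frame and its values are illustrative, not one of the course datasets:

```python
import numpy as np
import pandas as pd

# A toy frame with one missing temperature
df = pd.DataFrame({"city": ["Athens", "Berlin", "Cairo"],
                   "temp": [31.0, np.nan, 35.5]})

# Selection and filtering with a Boolean mask
hot = df[df["temp"] > 32]

# Function application and mapping over a column
df["label"] = df["city"].map(str.upper)

# Handling missing data: fill the NaN with the column mean
df["temp"] = df["temp"].fillna(df["temp"].mean())

print(hot)
print(df)
```

The mask `df["temp"] > 32` is the same idiom as numpy's Boolean slicing, applied to a labeled, heterogeneous table.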
- Cluster computing with Spark and PySpark
Installing and configuring Spark with its standalone cluster manager
pyspark.sql.DataFrame and untyped operations
Running SQL programmatically
Schema objects and types
- Plotting with the matplotlib and seaborn packages
Matplotlib API primer
Figures, subplots, axes, lines and markers
Line and bar plots
Histograms and density plots
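The figure/subplot/axes vocabulary above can be sketched as follows; the data and the output file name are illustrative:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs anywhere
import matplotlib.pyplot as plt

# One figure, two subplots: a line plot and a histogram
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

xs = range(10)
ax1.plot(xs, [x * x for x in xs], marker="o")  # line plot with markers
ax1.set_title("Line plot")

ax2.hist([1, 2, 2, 3, 3, 3, 4], bins=4)        # a simple histogram
ax2.set_title("Histogram")

fig.savefig("demo.png")  # illustrative file name
```

Everything in the matplotlib API hangs off these objects: the `Figure` owns the subplots, each `Axes` owns its lines, bars and labels.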
- Introduction to scikit-learn
Gradient boosted trees
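A minimal configure/train/assess loop for gradient boosted trees with scikit-learn. Synthetic data stands in for the Kaggle datasets used in class, and the hyperparameters are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic binary classification problem (placeholder for a real dataset)
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Configure and train a gradient boosted trees model
model = GradientBoostingClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Assess it on the held-out split
acc = accuracy_score(y_test, model.predict(X_test))
print(acc)
```

The fit/predict/score shape of this snippet is uniform across scikit-learn estimators, which is what makes swapping models in and out cheap during EDA.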