# Coursera Data Science Specialization Review


# “Ask the right questions, manipulate data sets, and create visualizations to communicate results.”

“This Specialization covers the concepts and tools you’ll need throughout the entire data science pipeline, from asking the right kinds of questions to making inferences and publishing results. In the final Capstone Project, you’ll apply the skills learned by building a data product using real-world data. At completion, students will have a portfolio demonstrating their mastery of the material.”

The JHU Data Science Specialization is one of the earliest MOOCs to have been available online, alongside Andrew Ng's Machine Learning and The Analytics Edge on edX.

The data science specialization consists of 9 courses and one final capstone project.

Each course is a combination of video lectures, graded quizzes, and peer-graded projects. The list of the courses is as follows:

**Course 1: The Data Scientist’s Toolbox**

**Course 2: R Programming**

**Course 3: Getting and Cleaning Data**

**Course 4: Exploratory Data Analysis**

**Course 5: Reproducible Research**

**Course 6: Statistical Inference**

**Course 7: Regression Models**

**Course 8: Practical Machine Learning**

**Course 9: Developing Data Products**

**Course 10: Capstone project**

So far I’ve completed the 9 courses and I’m still working on the final capstone project.

Here is my overall review of the Data Science Specialization:

## Strengths

- The first courses are very easy, and you don't need a data science or heavy math background to complete them; however, having decent programming skills and a good statistics background will be an advantage.
- The specialization uses R, GitHub, and RPubs, and all of those tools are completely free. R is nowadays one of the most popular statistical languages, alongside Python and SAS (which is very expensive). There's also a really big community supporting R.
- The specialization covers a broad range of topics such as R programming, statistical inference, exploratory data analysis, reproducible research, and machine learning.
- Each course contains at least one project, and this is where you get to learn the most. I've always found that the moment I learn the most is when I take a test (even if I fail) or when I work on a real project.

## Weaknesses

- Because the specialization is aimed at an audience with no heavy math background and no previous exposure to R, the courses are a bit slow at the beginning.
- On the other hand, if you're not familiar with statistical inference you might find yourself struggling to understand some concepts, as Professor Brian Caffo tends to go a bit fast over some essential notions of statistics.
- The price: £37/month, so the quicker you finish, the cheaper it costs. The price is still affordable, but the first courses are definitely not worth it, as you can just download the swirl package in R and follow its tutorials. However, if you want the final certificate, you do need to complete all 9 courses and the final project. You can still audit the courses for free: you'll have access to all the videos, but not to the project assignments, which are the best part of this MOOC.
- Finally, the main drawback of this MOOC is the peer-graded assignments: some students take them very seriously, review your work properly, and give good feedback, whereas others don't even bother reviewing your work.

## Brief overview of each course

### Course 1: The Data Scientist’s Toolbox

#### Review

This course is a big joke; they shouldn't charge for it. If you know how to use GitHub and install R, you're done…

### Course 2: R Programming

In this course you will learn how to program in R and how to use R for effective data analysis. You will learn how to install and configure software necessary for a statistical programming environment and describe generic programming language concepts as they are implemented in a high-level statistical language. The course covers practical issues in statistical computing which includes programming in R, reading data into R, accessing R packages, writing R functions, debugging, profiling R code, and organizing and commenting R code. Topics in statistical data analysis will provide working examples.

#### Review

If you already have a programming background and you understand the concepts of vector, matrix, and data.frame manipulation, this course will be really easy. However, if you're not familiar with programming or don't know R at all, this course is definitely worth it.
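If those terms are new to you, here is a minimal sketch of my own (not taken from the course) of the three core structures the course drills:

```r
v  <- c(2, 4, 6)                     # vector
m  <- matrix(1:6, nrow = 2)          # 2x3 matrix, filled column by column
df <- data.frame(id = 1:3, value = v)

v * 2                # vectorised arithmetic: 4 8 12
m[2, 3]              # row/column indexing: 6
df[df$value > 3, ]   # filter the rows of a data.frame by a condition
```

Once these feel natural, most of the R Programming assignments are straightforward.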

### Course 3: Getting and Cleaning Data

Before you can work with data you have to get some. This course will cover the basic ways that data can be obtained. The course will cover obtaining data from the web, from APIs, from databases and from colleagues in various formats. It will also cover the basics of data cleaning and how to make data “tidy”. Tidy data dramatically speeds up downstream data analysis tasks. The course will also cover the components of a complete data set including raw data, processing instructions, codebooks, and processed data. The course will cover the basics needed for collecting, cleaning, and sharing data.

#### Review

Well, this course teaches the essential knowledge of reading and cleaning data. In this course you get exposed to the dplyr package, which is, I think, one of the most popular and important packages to master. However, whenever you want to read a specific file or do a specific string manipulation in R, you can just google it and find the answer, so there's no need to watch dozens and dozens of videos for it. Not worth it.
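To give an idea of what the course drills, here is a minimal dplyr sketch of my own (not from the course material), using the built-in mtcars dataset:

```r
library(dplyr)

# Average fuel consumption per number of cylinders
mtcars %>%
  group_by(cyl) %>%
  summarise(avg_mpg = mean(mpg), n = n()) %>%
  arrange(desc(avg_mpg))
```

Once you know the handful of verbs (filter, mutate, group_by, summarise, arrange), most tidying tasks reduce to chaining them with the pipe.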

### Course 4: Exploratory Data Analysis

This course covers the essential exploratory techniques for summarizing data. These techniques are typically applied before formal modeling commences and can help inform the development of more complex statistical models. Exploratory techniques are also important for eliminating or sharpening potential hypotheses about the world that can be addressed by the data. We will cover in detail the plotting systems in R as well as some of the basic principles of constructing data graphics. We will also cover some of the common multivariate statistical techniques used to visualize high-dimensional data.

#### Review

I really liked this course, and it's definitely worth it. First, ggplot is a must in R: plotting data is where to start in data science, and if you want to analyse data and start making assumptions, ggplot is your guy. In addition to ggplot, you'll get exposed to the k-means algorithm (a clustering algorithm) and PCA (a dimensionality-reduction algorithm), though Brian skips all the math. You'll see PCA again in Course 8, “Practical Machine Learning”, but that course also skips the core math and doesn't go deep enough to really understand the concept.
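For reference, both algorithms ship with base R's stats package; here is a minimal sketch of my own on the built-in iris data:

```r
# PCA: centre and scale the four numeric measurements, then project
pca <- prcomp(iris[, 1:4], center = TRUE, scale. = TRUE)
summary(pca)   # proportion of variance explained by each component

# k-means: cluster the observations into 3 groups on the first two PCs
set.seed(42)
km <- kmeans(pca$x[, 1:2], centers = 3, nstart = 25)
table(km$cluster, iris$Species)   # compare clusters with the true species
```

Even without the math, seeing how much variance the first component captures makes the point of dimensionality reduction concrete.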

### Course 5: Reproducible Research

This course focuses on the concepts and tools behind reporting modern data analyses in a reproducible manner. Reproducible research is the idea that data analyses, and more generally, scientific claims, are published with their data and software code so that others may verify the findings and build upon them. The need for reproducibility is increasing dramatically as data analyses become more complex, involving larger datasets and more sophisticated computations. Reproducibility allows for people to focus on the actual content of a data analysis, rather than on superficial details reported in a written summary. In addition, reproducibility makes an analysis more useful to others because the data and code that actually conducted the analysis are available. This course will focus on literate statistical analysis tools which allow one to publish data analyses in a single document that allows others to easily execute the same analysis to obtain the same results.

#### Review

The course teaches how to use R Markdown and other tools/languages to write and publish documents that contain data analyses. I found R Markdown really handy, and if you want to share your work with the community on RStudio, RPubs, or Kaggle, R Markdown is a must. So I found this course quite useful as well.
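For anyone who hasn't seen it, an R Markdown document is just markdown prose with embedded R chunks; here is a minimal sketch of my own (not the course's template):

````markdown
---
title: "My analysis"
output: html_document
---

The chunk below runs every time the document is knitted, so the
numbers and figures always match the code that produced them.

```{r speed-dist-plot}
summary(cars)   # built-in dataset: speed and stopping distance
plot(cars)
```
````

Knitting this single file produces an HTML report with the code, its output, and the plot inline, which is the whole point of reproducible research.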

### Course 6: Statistical Inference

Statistical inference is the process of drawing conclusions about populations or scientific truths from data. There are many modes of performing inference including statistical modeling, data oriented strategies and explicit use of designs and randomization in analyses. Furthermore, there are broad theories (frequentists, Bayesian, likelihood, design based, …) and numerous complexities (missing data, observed and unobserved confounding, biases) for performing inference. A practitioner can often be left in a debilitating maze of techniques, philosophies and nuance. This course presents the fundamentals of inference in a practical approach for getting things done. After taking this course, students will understand the broad directions of statistical inference and use this information for making informed choices in analyzing data.

#### Review

This course is a big disappointment!

Statistical inference is really fundamental in data science. JHU tried to fit this course into four weeks, and as a result it is completely botched.

It's a big shame that they tried to pack it into four weeks; they could easily have split it into two courses and dropped Developing Data Products or The Data Scientist's Toolbox instead.

Luckily I have a degree with a minor in statistics, so I didn't struggle with the exams. However, if you're not familiar with statistical inference, I'd definitely recommend studying with other material. (Foundations of Data Analysis part 1 & 2 on edX could be a good option, as it also uses R and is completely free!)
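To be fair, the mechanics the course teaches are simple in R; it's the concepts behind them that need more time. A minimal base-R example of my own:

```r
# Two-sample t-test: does mpg differ between automatic (am = 0)
# and manual (am = 1) transmissions in the built-in mtcars data?
fit <- t.test(mpg ~ am, data = mtcars)
fit$p.value    # small p-value -> reject the null of equal means
fit$conf.int   # 95% confidence interval for the difference in means
```

Understanding why that interval means what it does is exactly the part the four-week format rushes through.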

### Course 7: Regression Models

Linear models, as their name implies, relate an outcome to a set of predictors of interest using linear assumptions. Regression models, a subset of linear models, are the most important statistical analysis tool in a data scientist’s toolkit. This course covers regression analysis, least squares and inference using regression models. Special cases of the regression model, ANOVA and ANCOVA will be covered as well. Analysis of residuals and variability will be investigated. The course will cover modern thinking on model selection and novel uses of regression models including scatterplot smoothing.

#### Review

Again, there's no chance you can get a solid grasp of regression models from this course. It's too short, and the coverage of regression models is far from complete: it tells you how to run a linear or logistic regression in R and says only a little about the interpretation and optimization of a model. However, this time there were a few optional videos with all the math behind the algorithms; I think they should add such optional videos for every single algorithm, for the people who would like to go deeper or just enjoy the magic of math.
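For the record, running the models is the easy part the course does cover; a base-R sketch of my own:

```r
# Simple linear regression: fuel consumption as a function of weight
fit <- lm(mpg ~ wt, data = mtcars)
coef(fit)               # intercept and slope (mpg lost per 1000 lbs)
summary(fit)$r.squared  # proportion of variance explained

# Logistic regression: probability of a manual transmission from weight
logit <- glm(am ~ wt, data = mtcars, family = binomial)
coef(logit)
```

Interpreting those coefficients, checking residuals, and choosing between models is where the course runs out of time.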

### Course 8: Practical Machine Learning

One of the most common tasks performed by data scientists and data analysts is prediction and machine learning. This course will cover the basic components of building and applying prediction functions with an emphasis on practical applications. The course will provide basic grounding in concepts such as training and tests sets, overfitting, and error rates. The course will also introduce a range of model based and algorithmic machine learning methods including regression, classification trees, Naive Bayes, and random forests. The course will cover the complete process of building prediction functions including data collection, feature creation, algorithms, and evaluation.

#### Review

This one was my favourite! In this course you'll use the caret package, another must.

The caret package is really useful for data splitting, pre-processing, feature selection, and model tuning. This course was mainly taught by Roger D. Peng, and he used a very practical approach that I really liked. It covers different areas of machine learning and gives a foretaste of further areas of study. Definitely worth it.
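Here is a minimal caret sketch of my own (not the course's code) showing the split-train-evaluate cycle on the built-in iris data:

```r
library(caret)

set.seed(123)
# Hold out 25% of the rows for testing, stratified by species
idx   <- createDataPartition(iris$Species, p = 0.75, list = FALSE)
train <- iris[idx, ]
test  <- iris[-idx, ]

# Fit a classification tree with 10-fold cross-validation
fit  <- train(Species ~ ., data = train, method = "rpart",
              trControl = trainControl(method = "cv", number = 10))
pred <- predict(fit, test)
confusionMatrix(pred, test$Species)$overall["Accuracy"]
```

The nice thing about caret is that swapping `method = "rpart"` for `"rf"` or `"glm"` reuses the same splitting, tuning, and evaluation code.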

### Course 9: Developing Data Products

A data product is the production output from a statistical analysis. Data products automate complex analysis tasks or use technology to expand the utility of a data informed model, algorithm or inference. This course covers the basics of creating data products using Shiny, R packages, and interactive graphics. The course will focus on the statistical fundamentals of creating a data product that can be used to tell a story about data to a mass audience.

#### Review

Well, it mostly repeats what was said in Reproducible Research, and for the project you have to build an interactive dashboard using Shiny and Plotly. I'm a BI consultant, so I like building dashboards, whether with SSRS, Power BI, QlikView, or Tableau, but Shiny, no more please!!! It took me several hours to produce a horrible interactive dashboard instead of 2 minutes with a BI tool. OK, I'm probably biased since I work in BI with non-free tools.

I think Shiny is still good for internal use, for small-scale work, or maybe for very specific dashboards that cannot be done with normal BI tools…
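For completeness, here is roughly what a minimal Shiny app looks like (my own sketch, not the course project):

```r
library(shiny)

# Minimal interactive app: a slider controlling a histogram
ui <- fluidPage(
  sliderInput("bins", "Number of bins:", min = 5, max = 50, value = 20),
  plotOutput("hist")
)

server <- function(input, output) {
  output$hist <- renderPlot({
    hist(mtcars$mpg, breaks = input$bins, main = "Miles per gallon")
  })
}

# shinyApp(ui, server)  # launches the app in a browser when run interactively
```

Everything is code, which is flexible but also why even a simple dashboard takes longer than drag-and-drop BI tools.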

### Course 10: Capstone project

The capstone project class will allow students to create a usable/public data product that can be used to show your skills to potential employers. Projects will be drawn from real-world problems and will be conducted with industry, government, and academic partners.

#### Review

Again, a big joke! You spend nearly 6 months learning different statistical methods, so you'd expect to work on a project that combines all the methods you've learnt. But no!! The project is about Natural Language Processing, and there are actually no courses at all on this subject.

NLP is a very challenging and interesting topic, but the fact that the final project is not related to the 9 previous courses is really frustrating. Anyway, at least it's still an interesting challenge, and it'll really help you develop your problem-solving skills and expand your knowledge of NLP. I haven't finished the capstone project yet: I actually missed the deadline, and so far my laptop hasn't been powerful enough to run the different algorithms I've implemented (I've got 8 GB of RAM). Here is an introduction to the work I've done for the final capstone project: Exploratory analysis of SwiftKey dataset

## Summary

The courses are mainly focused on teaching R and addressing some high-level aspects of doing data science.

I don't think these courses are suited to beginners in programming and ML, especially the capstone project and the statistical inference course.

Also, these courses are not good at all for getting a solid understanding of statistics or for learning the different aspects of ML in detail.

The best part of these courses is that you'll learn R throughout the whole specialization, so if you don't know R already and want to get exposed to ML at the same time, this MOOC might be right for you.

The Exploratory Data Analysis and Practical Machine Learning courses have really good content, so if you already know R and R Markdown, I'd definitely recommend taking those two courses and skipping the rest.

Finally, if you haven't been exposed to R and statistics before, I'd highly recommend learning the basics of R with the swirl package and building up your statistics knowledge with Foundations of Data Analysis part 1 & 2 on edX.

Linear algebra is not essential for these courses, but it'll help you understand the more advanced concepts and the math behind the different algorithms presented in the Regression Models course and in PCA, and it will also become essential if you want to delve into ML.