Data Analysis with Python Peer Graded assignment Solution – Why Quiz

Project Scenario: Data Analysis with Python Peer Graded assignment Solution

In this assignment, you are a Data Analyst working at a Real Estate Investment Trust. The Trust would like to start investing in Residential real estate. You are tasked with determining the market price of a house given a set of features. You will analyze and predict housing prices using attributes or features such as square footage, number of bedrooms, number of floors, and so on. A template notebook is provided in the lab; your job is to complete the ten questions. Some hints to the questions are given in the template notebook.

Dataset Used in this Assignment

The dataset contains house sale prices for King County, which includes Seattle. It covers homes sold between May 2014 and May 2015, and was slightly modified for the purposes of this course.

For this project, you will utilize JupyterLab running on the Cloud in Skills Network Labs environment.

Notebook URL: Alternatively, you can work on your local machine or in any other environment of your choice by downloading the House Sales notebook from the link provided in the course.

Instructions: Data Analysis with Python Peer Graded assignment Solution

Here you are!

I hope you enjoyed playing Data Scientist at a Real Estate Investment Trust. Well done!

This rubric will provide you with a grade breakdown for the evaluation of the final project of your peers.

This project is worth 13% of your final grade.


House Sales in King County, USA

This dataset contains house sale prices for King County, which includes Seattle. It includes homes sold between May 2014 and May 2015.

id: A notation for a house

date: Date the house was sold

price: Price (the prediction target)

bedrooms: Number of bedrooms

bathrooms: Number of bathrooms

sqft_living: Square footage of the home

sqft_lot: Square footage of the lot

floors: Total floors (levels) in the house

waterfront: Whether the house has a view of a waterfront

view: Has been viewed

condition: How good the condition is overall

grade: Overall grade given to the housing unit, based on the King County grading system

sqft_above: Square footage of the house apart from the basement

sqft_basement: Square footage of the basement

yr_built: Year built

yr_renovated: Year the house was renovated

zipcode: Zip code

lat: Latitude coordinate

long: Longitude coordinate

sqft_living15: Living room area in 2015 (implies some renovations); this might or might not have affected the lot size

sqft_lot15: Lot size area in 2015 (implies some renovations)

You will require the following libraries:

In [1]:

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler,PolynomialFeatures
from sklearn.linear_model import LinearRegression
%matplotlib inline

Module 1: Importing Data Sets

Load the csv:

In [2]:

file_name='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/coursera/project/kc_house_data_NaN.csv'
df=pd.read_csv(file_name)

We use the method head to display the first 5 rows of the dataframe.

In [3]:

df.head()

Out[3]:

[Output: the first five rows of the dataframe, spanning the columns Unnamed: 0, id, date, price, bedrooms, bathrooms, sqft_living, sqft_lot, floors, waterfront, view, condition, grade, sqft_above, sqft_basement, yr_built, yr_renovated, zipcode, lat, long, sqft_living15, and sqft_lot15.]

5 rows × 22 columns

Question 1

Display the data types of each column using the attribute dtypes, then take a screenshot and submit it; include your code in the image.

In [4]:

df.dtypes

Out[4]:

Unnamed: 0         int64
id                 int64
date              object
price            float64
bedrooms         float64
bathrooms        float64
sqft_living        int64
sqft_lot           int64
floors           float64
waterfront         int64
view               int64
condition          int64
grade              int64
sqft_above         int64
sqft_basement      int64
yr_built           int64
yr_renovated       int64
zipcode            int64
lat              float64
long             float64
sqft_living15      int64
sqft_lot15         int64
dtype: object

We use the method describe to obtain a statistical summary of the dataframe.

In [5]:

df.describe()

Out[5]:

[Output: the describe() summary table — count, mean, std, min, 25%, 50%, 75%, and max for each of the 21 numeric columns.]

8 rows × 21 columns

Module 2: Data Wrangling

Question 2

Drop the columns "id" and "Unnamed: 0" from axis 1 using the method drop(), then use the method describe() to obtain a statistical summary of the data. Take a screenshot and submit it; make sure the inplace parameter is set to True.

In [6]:

df.drop("id", axis=1,inplace=True)
df.drop("Unnamed: 0", axis=1, inplace=True)

df.describe()

Out[6]:

[Output: the describe() summary table for the 19 columns remaining after the drop. Note that the counts for bedrooms (21600) and bathrooms (21603) are below 21613, indicating missing values.]

We can see that we have missing values in the columns bedrooms and bathrooms.

In [7]:

print("number of NaN values for the column bedrooms :", df['bedrooms'].isnull().sum())
print("number of NaN values for the column bathrooms :", df['bathrooms'].isnull().sum())
number of NaN values for the column bedrooms : 13
number of NaN values for the column bathrooms : 10

We can replace the missing values of the column 'bedrooms' with the mean of the column 'bedrooms'  using the method replace(). Don’t forget to set the inplace parameter to True

In [8]:

mean=df['bedrooms'].mean()
df['bedrooms'].replace(np.nan,mean, inplace=True)

We also replace the missing values of the column 'bathrooms' with the mean of the column 'bathrooms' using the method replace(). Don’t forget to set the inplace parameter to True.

In [9]:

mean=df['bathrooms'].mean()
df['bathrooms'].replace(np.nan,mean, inplace=True)

In [10]:

print("number of NaN values for the column bedrooms :", df['bedrooms'].isnull().sum())
print("number of NaN values for the column bathrooms :", df['bathrooms'].isnull().sum())
number of NaN values for the column bedrooms : 0
number of NaN values for the column bathrooms : 0
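An equivalent, arguably more idiomatic approach (not required by the assignment) is fillna. The sketch below uses a tiny made-up frame standing in for the housing data:

```python
import numpy as np
import pandas as pd

# Tiny made-up frame standing in for the housing data.
toy = pd.DataFrame({"bedrooms": [3.0, np.nan, 2.0, 4.0],
                    "bathrooms": [1.0, 2.25, np.nan, 3.0]})

# fillna(mean) is equivalent to replace(np.nan, mean); assigning the
# result back avoids relying on inplace=True, which newer pandas discourages.
for col in ["bedrooms", "bathrooms"]:
    toy[col] = toy[col].fillna(toy[col].mean())

print(toy.isnull().sum().sum())  # 0
```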

Module 3: Exploratory Data Analysis

Question 3

Use the method value_counts to count the number of houses with unique floor values, use the method .to_frame() to convert it to a dataframe.

In [11]:

df['floors'].value_counts().to_frame()

Out[11]:

floors
1.0    10680
2.0     8241
1.5     1910
3.0      613
2.5      161
3.5        8

Question 4

Use the function boxplot in the seaborn library to determine whether houses with a waterfront view or without a waterfront view have more price outliers.

In [13]:

sns.boxplot(x="waterfront", y="price", data=df)

Out[13]:

<matplotlib.axes._subplots.AxesSubplot at 0x7f456e280fd0>
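If you want to quantify what the boxplot shows, you can count the outliers in each group with the same 1.5 × IQR rule the whiskers use. The sketch below runs on synthetic prices and a made-up waterfront flag, not the King County data:

```python
import numpy as np
import pandas as pd

# Synthetic prices standing in for df['price'], split by a fake waterfront flag.
rng = np.random.default_rng(0)
prices = pd.Series(np.concatenate([rng.normal(500_000, 100_000, 1000),
                                   rng.normal(1_600_000, 900_000, 50)]))
waterfront = pd.Series([0] * 1000 + [1] * 50)

def count_outliers(s):
    # Same 1.5 * IQR rule that the boxplot whiskers use.
    q1, q3 = s.quantile([0.25, 0.75])
    iqr = q3 - q1
    return int(((s < q1 - 1.5 * iqr) | (s > q3 + 1.5 * iqr)).sum())

by_group = prices.groupby(waterfront).apply(count_outliers)
print(by_group)
```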

Question 5

Use the function regplot in the seaborn library to determine if the feature sqft_above is negatively or positively correlated with price.

In [15]:

sns.regplot(x="sqft_above", y="price", data=df, ci=None)

Out[15]:

<matplotlib.axes._subplots.AxesSubplot at 0x7f456d1cd910>

We can use the Pandas method corr() to find the feature other than price that is most correlated with price.

In [16]:

df.corr()['price'].sort_values()

Out[16]:

zipcode         -0.053203
long             0.021626
condition        0.036362
yr_built         0.054012
sqft_lot15       0.082447
sqft_lot         0.089661
yr_renovated     0.126434
floors           0.256794
waterfront       0.266369
lat              0.307003
bedrooms         0.308797
sqft_basement    0.323816
view             0.397293
bathrooms        0.525738
sqft_living15    0.585379
sqft_above       0.605567
grade            0.667434
sqft_living      0.702035
price            1.000000
Name: price, dtype: float64
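To pull out the most correlated feature programmatically, idxmax on the correlation series works; a sketch on a toy frame (in the notebook you would call this on df itself):

```python
import pandas as pd

# Toy frame; in the notebook you would call this on df itself.
toy = pd.DataFrame({"price": [200, 400, 600, 800],
                    "sqft_living": [1000, 2000, 3100, 3900],
                    "zipcode": [98002, 98001, 98003, 98001]})

# numeric_only=True matters on pandas >= 2.0 if non-numeric columns
# such as 'date' are still present in the frame.
corr = toy.corr(numeric_only=True)["price"].drop("price")
print(corr.idxmax())  # sqft_living
```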

Module 4: Model Development

We can fit a linear regression model using the longitude feature 'long' and calculate the R^2.

In [17]:

X = df[['long']]
Y = df['price']
lm = LinearRegression()
lm.fit(X,Y)
lm.score(X, Y)

Out[17]:

0.00046769430149007363

Question 6

Fit a linear regression model to predict the 'price' using the feature 'sqft_living' then calculate the R^2. Take a screenshot of your code and the value of the R^2.

In [18]:

X1 = df[['sqft_living']]
Y1 = df['price']
lm = LinearRegression()
lm.fit(X1, Y1)
lm.score(X1, Y1)

Out[18]:

0.4928532179037931

Question 7

Fit a linear regression model to predict the 'price' using the list of features:

In [19]:

features =["floors", "waterfront","lat" ,"bedrooms" ,"sqft_basement" ,"view" ,"bathrooms","sqft_living15","sqft_above","grade","sqft_living"]     

Then calculate the R^2. Take a screenshot of your code.

In [20]:

X2 = df[features]
Y2 = df['price']
lm.fit(X2,Y2)
lm.score(X2,Y2)

Out[20]:

0.657679183672129

This will help with Question 8

Create a list of tuples; the first element of each tuple contains the name of the estimator:

'scale'

'polynomial'

'model'

The second element of each tuple contains the model constructor:

StandardScaler()

PolynomialFeatures(include_bias=False)

LinearRegression()

In [21]:

Input=[('scale',StandardScaler()),('polynomial', PolynomialFeatures(include_bias=False)),('model',LinearRegression())]

Question 8

Use the list to create a pipeline object to predict the ‘price’, fit the object using the features in the list features, and calculate the R^2.

In [22]:

pipe=Pipeline(Input)
pipe.fit(df[features],df['price'])
pipe.score(df[features],df['price'])

Out[22]:

0.7513408553309376
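To see why the polynomial step lifts the R^2, here is a minimal sketch on synthetic data with a deliberately quadratic target; plain linear regression on the raw features cannot capture the squared term, but the same scale/polynomial/model pipeline can:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Synthetic quadratic target: y depends on the square of the first feature.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
y = X[:, 0] ** 2 + 2 * X[:, 1] + rng.normal(scale=0.05, size=200)

pipe = Pipeline([("scale", StandardScaler()),
                 ("polynomial", PolynomialFeatures(include_bias=False)),
                 ("model", LinearRegression())])
pipe.fit(X, y)
print(round(pipe.score(X, y), 3))  # close to 1.0: the squared term is captured
```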

Module 5: Model Evaluation and Refinement

Import the necessary modules:

In [23]:

from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
print("done")
done

We will split the data into training and testing sets:

In [24]:

features =["floors", "waterfront","lat" ,"bedrooms" ,"sqft_basement" ,"view" ,"bathrooms","sqft_living15","sqft_above","grade","sqft_living"]    
X = df[features]
Y = df['price']

x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.15, random_state=1)

print("number of test samples:", x_test.shape[0])
print("number of training samples:",x_train.shape[0])
number of test samples: 3242
number of training samples: 18371
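The module imports cross_val_score but the notebook above never calls it; as an aside, here is a minimal sketch of how it could be used, on synthetic data rather than the housing features:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for df[features] and df['price'].
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([3.0, -2.0, 1.0]) + rng.normal(scale=0.1, size=200)

# Default scoring for a regressor is R^2; the mean over 4 folds is a
# steadier estimate than a single train/test split.
scores = cross_val_score(LinearRegression(), X, y, cv=4)
print(round(scores.mean(), 4))
```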

Question 9

Create and fit a Ridge regression object using the training data, set the regularization parameter to 0.1, and calculate the R^2 using the test data.

In [25]:

from sklearn.linear_model import Ridge

In [26]:

RidgeModel = Ridge(alpha=0.1)
RidgeModel.fit(x_train, y_train)
RidgeModel.score(x_test, y_test)

Out[26]:

0.6478759163939122
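alpha = 0.1 is given by the question, but in practice you would compare a few regularization strengths. A sketch on synthetic data (not the housing features):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the housing features and price.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 5))
y = X @ np.arange(1.0, 6.0) + rng.normal(scale=0.5, size=300)
x_tr, x_te, y_tr, y_te = train_test_split(X, y, test_size=0.15, random_state=1)

# Test-set R^2 for a few regularization strengths.
for alpha in [0.01, 0.1, 1.0, 10.0]:
    r2 = Ridge(alpha=alpha).fit(x_tr, y_tr).score(x_te, y_te)
    print(alpha, round(r2, 4))
```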

Question 10

Perform a second-order polynomial transform on both the training data and the testing data. Create and fit a Ridge regression object using the training data, set the regularization parameter to 0.1, and calculate the R^2 using the test data. Take a screenshot of your code and the R^2.

In [27]:

pr = PolynomialFeatures(degree=2)
x_train_pr = pr.fit_transform(x_train)
# Use transform (not fit_transform) on the test set, so it is mapped with
# the transform fitted on the training data.
x_test_pr = pr.transform(x_test)

RidgeModel = Ridge(alpha=0.1)
RidgeModel.fit(x_train_pr, y_train)
RidgeModel.score(x_test_pr, y_test)

Out[27]:

0.7002744279896707

Once you complete your notebook, you will have to share it. Select the icon on the top right, as marked in red in the image below; a dialogue box should open, and select the option "all content excluding sensitive code cells".


You can then share the notebook via a URL by scrolling down, as shown in the following image:

Conclusion:

I hope this Data Analysis with Python Peer Graded assignment Solution was useful and helped you learn something new from this course. If it helped you, don’t forget to bookmark our site for more Quiz Answers.

Enroll on Coursera

This course is intended for audiences of all experience levels who are interested in learning new skills in a business context; there are no prerequisite courses.

Keep Learning!

More Peer-graded Assignment Solutions >>

Peer-graded Assignment: LGBTQIA Inclusive Workplace Memo Solution

Honors Peer-graded Assignment: Advanced SQL for Data Engineers Solution

Solution of Peer-graded Assignment: Analyzing Historical Stock/Revenue Data and Building a Dashboard
