Data Science Using R

How to Rename Columns in R


Often the data you’re working with has abstract column names, such as (x1, x2, x3…). Typically, the first step I take when renaming columns in R is opening my web browser.

For some reason, no matter how many times I do this, it’s just one of those things I have to look up. (Hoping that writing about it will change that.)

The cars dataset contains data from the 1920s on “Speed and Stopping Distances of Cars”. There are only two columns, shown below.

colnames(datasets::cars)
[1] "speed" "dist" 

If we wanted to rename the column “dist” to make it clearer what the data means, we can do so in a few different ways.

Using dplyr:

cars %>% 
  rename("Stopping Distance (ft)" = dist) %>% 
  colnames()

[1] "speed"             "Stopping Distance (ft)"
cars %>%
  rename("Stopping Distance (ft)" = dist, "Speed (mph)" = speed) %>%
  colnames()

[1] "Speed (mph)"            "Stopping Distance (ft)"

Using base R:

colnames(cars)[2] <- "Stopping Distance (ft)"

[1] "speed"                  "Stopping Distance (ft)"

colnames(cars)[1:2] <- c("Speed (mph)", "Stopping Distance (ft)")

[1] "Speed (mph)"            "Stopping Distance (ft)"

Using grep:

colnames(cars)[grep("dist", colnames(cars))] <- "Stopping Distance (ft)"

[1] "speed"                  "Stopping Distance (ft)"
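A further base-R option, shown as a minimal sketch, is setNames(), which returns a renamed copy without modifying the original data frame:

```r
# setNames() returns a renamed copy of the data frame; the original
# datasets::cars is left untouched (column order assumed: speed, dist)
renamed <- setNames(datasets::cars, c("Speed (mph)", "Stopping Distance (ft)"))
colnames(renamed)
# [1] "Speed (mph)"            "Stopping Distance (ft)"
```

This is handy inside a pipeline when you don’t want to overwrite the source data.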

How To Select Multiple Columns Using Grep & R

Why you should be using grep when programming with R.

There’s a reason grep is still included in most, if not all, programming languages 44 years after its creation: it’s useful and simple to use. Below is an example of using grep to make selecting multiple columns in R simple and easy to read.

The dataset below has the following column names.

names(data) # Column Names
 [1] "fips"                 "state"                "county"               "metro_area"          
 [5] "population"           "med_hh_income"        "poverty_rate"         "population_lowaccess"
 [9] "lowincome_lowaccess"  "no_vehicle_lowaccess" "s_grocery"            "s_supermarket"       
[13] "s_convenience"        "s_specialty"          "s_farmers_market"     "r_fastfood"          
[17] "r_full_service"      

How can we select only the columns we need to work with?

  • metro_area
  • med_hh_income
  • poverty_rate
  • population_lowaccess
  • lowincome_lowaccess
  • no_vehicle_lowaccess
  • s_grocery
  • s_supermarket
  • s_convenience
  • s_specialty
  • s_farmers_market
  • r_fastfood
  • r_full_service

We can tell R exactly which columns we want by listing each one, as below

data[c("metro_area","med_hh_income", "poverty_rate", "population_lowaccess", "lowincome_lowaccess", "no_vehicle_lowaccess","s_grocery","s_supermarket","s_convenience","s_specialty","s_farmers_market", "r_fastfood", "r_full_service")]

OR

We can tell R where each column we want is located.

data[c(4,6,7:17)]

First, writing out each individual column is time-consuming, and chances are you’re going to make a typo (I did when writing it). With the second option, we first have to figure out where the columns are located and then tell R. Looking at the columns we are trying to access versus the others, there’s a specific difference: all of these columns have a “_” in their name, and we can use regular expressions (grep) to select them.

data[grep("_", names(data))]

FYI… to get the column locations, you can use…

grep("_", names(data))
[1]  4  6  7  8  9 10 11 12 13 14 15 16 17

You will rarely have a regular expression as easy as “_” to select multiple columns; a very useful resource for learning and practicing regular expressions is https://regexr.com
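For instance, with a pattern slightly less trivial than “_”: anchoring with ^ selects only the store columns whose names start with “s_”. This sketch uses a small made-up data frame rather than the dataset above:

```r
# Hypothetical stand-in for the food-access data
df <- data.frame(state = "WA", s_grocery = 10, s_convenience = 5, r_fastfood = 7)

# "^s_" matches names that *start* with "s_", so state and r_fastfood are excluded
store_cols <- df[grep("^s_", names(df))]
names(store_cols)
# [1] "s_grocery"     "s_convenience"
```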

Data was obtained from https://www.ers.usda.gov/data-products/food-access-research-atlas/download-the-data/

Creating Excel Workbooks with multiple sheets in R

Create Excel Workbooks

Generally, when doing anything in R, I work with .csv files; they’re fast and straightforward to use. However, there are times when I need to create a bunch of them as output, and having to open each one individually can be a pain. In those cases, it’s much better to create a workbook where each of the .csv files you would have created becomes a separate sheet.



Below is a simple script I use frequently that gets the job done. Also included is the creation of dummy data to outline the process.

EXAMPLE CODE:

Libraries used

library(tidyverse)
library(openxlsx)

Creating example files to work with

products <- c("Monitor", "Laptop", "Keyboards", "Mice")
Stock <- c(20,10,25,50)
Computer_Supplies <- cbind(products,Stock)
products <- c("Packs of Paper", "Staples")
Stock <- c(100,35)
Office_Supplies <- cbind(products,Stock)
# Write the files to our directory
write.csv(Computer_Supplies, "Data/ComputerSupplies.csv", row.names = FALSE)
write.csv(Office_Supplies, "Data/OfficeSupplies.csv", row.names = FALSE)

Point to the directory your files are located in (.csv here) and read each one in as a list

# Get the file name read in as a column
read_filename <- function(fname) {
  read_csv(fname, col_names = TRUE) %>%
    mutate(filename = fname)
}
tbl <-
  list.files(path = "Data/",
             pattern = "\\.csv$",
             full.names = TRUE) %>%
  map_df(~read_filename(.))

Removing the path from the file names

*Note: The maximum length of an Excel sheet name is 31 characters, and these file names become the sheet names

tbl$filename <- gsub("Data/", "", tbl$filename)
tbl$filename <- gsub("\\.csv$", "", tbl$filename)
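An alternative sketch that avoids hard-coding the “Data/” prefix: base R’s basename() drops any leading directory, and tools::file_path_sans_ext() drops the extension:

```r
# basename() strips the directory, file_path_sans_ext() strips ".csv"
fname <- "Data/ComputerSupplies.csv"
tools::file_path_sans_ext(basename(fname))
# [1] "ComputerSupplies"
```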

Split the “tbl” object into individual lists

mylist <- tbl %>% split(.$filename)
names(mylist)
## [1] "/ComputerSupplies" "/OfficeSupplies"

Creating an Excel workbook and having each CSV file be a separate sheet

wb <- createWorkbook()
# For each list element, add a sheet named after the file and write the
# data without the last column (the helper "filename" column)
lapply(seq_along(mylist), function(i){
  addWorksheet(wb = wb, sheetName = names(mylist[i]))
  writeData(wb, sheet = i, mylist[[i]][-length(mylist[[i]])])
})
# Save Workbook
saveWorkbook(wb, "test.xlsx", overwrite = TRUE)

Reading in sheets from an Excel file

(The one we just created)

df_ComputerSupplies <- read.xlsx("test.xlsx", sheet = 1)

Loading and adding a new sheet to an already existing Excel workbook

wb <- loadWorkbook("test.xlsx")
names(wb)
## [1] "/ComputerSupplies" "/OfficeSupplies"
addWorksheet(wb, "New Sheet Name")
names(wb)
## [1] "/ComputerSupplies" "/OfficeSupplies"   "New Sheet Name"
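If you need every sheet back at once rather than one at a time, one approach (a sketch assuming the same openxlsx package; the two-sheet workbook here is built in a temp file to stay self-contained) is to loop over getSheetNames():

```r
library(openxlsx)

# Build a small two-sheet workbook in a temporary file
wb <- createWorkbook()
addWorksheet(wb, "ComputerSupplies")
writeData(wb, 1, data.frame(Stock = c(20, 10)))
addWorksheet(wb, "OfficeSupplies")
writeData(wb, 2, data.frame(Stock = c(100, 35)))
path <- tempfile(fileext = ".xlsx")
saveWorkbook(wb, path)

# Read every sheet into a named list of data frames
sheet_names <- getSheetNames(path)
all_sheets <- lapply(sheet_names, function(s) read.xlsx(path, sheet = s))
names(all_sheets) <- sheet_names
names(all_sheets)
```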

Exploring HR Employee Attrition and Performance with R

Based on IBM’s fictional data set created by their data scientists.

Introduction:
Employee attrition is when an employee leaves a company through normal means (loss of customers, retirement, resignation) and there is no one to fill the vacancy. Can a company identify employees that are likely to leave?
A high employee attrition rate is a sign of underlying problems and can affect a company in a very negative way. One such way is the cost of finding and training a replacement, as well as the possible strain on other workers who have to cover in the meantime.

Preprocessing:
This dataset was produced by IBM and has just under 1,500 observations of 31 different variables, including attrition. Four of the variables (EmployeeNumber, Over18, EmployeeCount, StandardHours) have the same value for all observations, so we drop them since they won’t be helpful for our model. Next, the column “ï..Age” was renamed to “Age” to make calling this variable simpler. Finally, for building and testing models, the dataset was split 70/30 into a training set and a test set.
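The preprocessing steps above can be sketched in base R; the four-row data frame here is a hypothetical stand-in for the IBM dataset, and the seed is arbitrary:

```r
# Hypothetical stand-in for the IBM attrition data
attrition <- data.frame(
  Age = c(41, 49, 37, 33),
  Attrition = c("Yes", "No", "Yes", "No"),
  EmployeeCount = 1, Over18 = "Y", StandardHours = 80
)

# Drop columns where every observation has the same value
constant <- vapply(attrition, function(x) length(unique(x)) == 1, logical(1))
attrition <- attrition[!constant]

# 70/30 train/test split
set.seed(42)
idx <- sample(nrow(attrition), size = floor(0.7 * nrow(attrition)))
attrition_train <- attrition[idx, ]
attrition_test  <- attrition[-idx, ]
```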

Initial Analysis:
Looking at the overall employee attrition rate for the entire dataset we can see it’s ~19%. Typically, a goal for a company is to keep this rate to ~10% and this dataset shows almost double that rate.

Here we show the influence of all factors on the employee attrition rate, and the influence levels are similar. However, we can take the top factors and explore them in depth.

Top Factor Analysis Findings:

Factor                  Variable Importance
Total Working Years     0.6564557
Years At Company        0.6525268
Overtime                0.6505954
Years In Current Role   0.6480052
Monthly Income          0.6456590
Job Level               0.6414233

Total Working Years:

Looking at the total number of years an employee has been in the workforce (at any job), there are two significant points. First, in the initial 3 years of working, the data shows an attrition rate of ~50%. This is expected, as people tend to start in an entry-level job and get their first job experience before moving on. The rate drops off over the following years until reaching 37 to 40 years in the workforce. Here we see just under a ~75% attrition rate, which is best explained by employees retiring: 18 plus 37 years is 55, around the age people usually retire.

Years at the Company:
The findings relating the number of years at the company to employee attrition followed the same trend as total working years, but with a lower rate at each point. The reasoning is most likely the same as for total working years: moving around early on, then staying put, and finally retiring.

Overtime:
Employees who work overtime have more than double the attrition rate (~25%) of those who don’t (~10%). A possible reason is that some employees get “burned out” working overtime, want to spend more time outside of work, and end up looking for a new job.

Monthly Income:
As expected, employees with a higher monthly income were less likely to leave the company, specifically in the human resources and research and development departments. The sales department was interesting in that monthly income wasn’t as big a factor in attrition.

Model Building:

Gradient Boosting Model (GBM):
Using a GBM with default parameters, the best training accuracy was 88%, at 150 trees. Using this model, we can generate predictions on the test data. The prediction accuracy was 87%, which, being very close to the training accuracy, suggests the model is not overfitting.

Interaction.depth   n.Trees   Accuracy   Kappa
1                   150       .878       .397
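The GBM fit above was presumably produced the same way as the rpart models below; a hedged sketch, assuming the caret and gbm packages and the attrition_train/attrition_test split from Preprocessing (with Attrition as a factor):

```r
library(caret)

# Fit a boosted tree model; caret's default grid tries several
# n.trees / interaction.depth combinations via resampling
gbm_model <- train(Attrition ~ ., data = attrition_train,
                   method = "gbm", verbose = FALSE)
gbm_model$bestTune   # best tuning parameters found

# Predict on the held-out 30% and compare against the true labels
preds <- predict(gbm_model, newdata = attrition_test)
confusionMatrix(preds, attrition_test$Attrition)
```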

Classification Trees:
The classification tree built with default parameters showed slightly lower overall accuracy. The training accuracy came to 82% and the prediction accuracy was 83%.

dt_model <- train(Attrition ~ ., data = attrition_train, method = "rpart")

cp      Accuracy   Kappa
0.039   .82        .24

When building a classification tree with only the top factors, the accuracy fell between the other two models, at 84% for both training and prediction.

dt_model1 <- train(Attrition ~ TotalWorkingYears + YearsAtCompany + OverTime + YearsInCurrentRole + MonthlyIncome + JobLevel, data = attrition_train, method = "rpart")

cp       Accuracy   Kappa
0.0301   .84        .19

Recommendation:

As we can see from this analysis, the biggest factor in employee attrition is the length of time in the workforce, whether at the same company or not. However, I would recommend looking deeper into employees who work overtime and getting their reasons for leaving. Possibly hold meetings with overtime workers and find out if they need help. For example, if they are working at capacity and still having to work overtime, it might be time, and possibly even cheaper, to hire extra help.
I would also recommend the company continue to collect this same type of data on an annual basis and run the models to find the employees most likely to leave. Once you have that list, set up reviews to see if there’s a way to help them out; you may even catch worker issues early on. Lastly, a further review of the sales department is warranted given its high attrition rate.

Introduction to Data Analysis with R

Using Basic Data Analysis functions on the mtcars dataset

Let’s Start

# Copying mtcars data frame to our new data frame myCars
myCars <- mtcars

Which car has the highest horsepower (hp) ? 

# Find the car with the highest horsepower
index <- which.max(myCars$hp)
# Display the car name along with the rest of the row
myCars[index,]
##               mpg cyl disp  hp drat   wt qsec vs am gear carb
## Maserati Bora  15   8  301 335 3.54 3.57 14.6  0  1    5    8

The Maserati Bora has the highest horsepower, at 335.

Exploring miles per gallon (mpg) of the cars

# Find and display the car with the highest mpg
index <- which.max(myCars$mpg)
myCars[index,]
##                 mpg cyl disp hp drat    wt qsec vs am gear carb
## Toyota Corolla 33.9   4 71.1 65 4.22 1.835 19.9  1  1    4    1

# Creating a sorted data frame, based on mpg
highMPGcars <- myCars[order(-myCars$mpg),]
head(highMPGcars)
##                 mpg cyl  disp  hp drat    wt  qsec vs am gear carb
## Toyota Corolla 33.9   4  71.1  65 4.22 1.835 19.90  1  1    4    1
## Fiat 128       32.4   4  78.7  66 4.08 2.200 19.47  1  1    4    1
## Honda Civic    30.4   4  75.7  52 4.93 1.615 18.52  1  1    4    2
## Lotus Europa   30.4   4  95.1 113 3.77 1.513 16.90  1  1    5    2
## Fiat X1-9      27.3   4  79.0  66 4.08 1.935 18.90  1  1    4    1
## Porsche 914-2  26.0   4 120.3  91 4.43 2.140 16.70  0  1    5    2

Which car has the “best” combination of mpg and hp?

# Best car combination of mpg and hp, where mpg and hp must be given equal
# weight
bestCombo <- myCars$hp / myCars$mpg
myCars[which.max(bestCombo),]
##               mpg cyl disp  hp drat   wt qsec vs am gear carb
## Maserati Bora  15   8  301 335 3.54 3.57 14.6  0  1    5    8

The Maserati Bora’s hp-to-mpg ratio is ~22 hp per mpg.
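Note that hp/mpg heavily rewards horsepower; one way to give the two columns truly equal weight (a sketch, not the method used above) is to standardize each to z-scores first and rank by their sum:

```r
# Standardize mpg and hp so each contributes on the same scale,
# then rank cars by the sum of their z-scores
z <- scale(mtcars[, c("mpg", "hp")])
combo <- z[, "mpg"] + z[, "hp"]
rownames(mtcars)[which.max(combo)]
# [1] "Maserati Bora"
```

Interestingly, the Maserati Bora still comes out on top: its horsepower is so far above the mean that it outweighs its poor fuel economy even on an equal-weight scale.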
