Tech Enthusiast with a passion for teaching others new technology.

How To Select Multiple Columns Using Grep & R

Why you need to be using Grep when programming with R.

There’s a reason grep is still included in most, if not all, programming languages 44 years after its creation: it’s useful and simple to use. Below is an example of using grep to make selecting multiple columns in R simple and easy to read.

The dataset below has the following column names.

names(data) # Column Names
 [1] "fips"                 "state"                "county"               "metro_area"          
 [5] "population"           "med_hh_income"        "poverty_rate"         "population_lowaccess"
 [9] "lowincome_lowaccess"  "no_vehicle_lowaccess" "s_grocery"            "s_supermarket"       
[13] "s_convenience"        "s_specialty"          "s_farmers_market"     "r_fastfood"          
[17] "r_full_service"      

How can we select only the columns we need to work with?

  • metro_area
  • med_hh_income
  • poverty_rate
  • population_lowaccess
  • lowincome_lowaccess
  • no_vehicle_lowaccess
  • s_grocery
  • s_supermarket
  • s_convenience
  • s_specialty
  • s_farmers_market
  • r_fastfood
  • r_full_service

We can tell R exactly which columns we want by listing each one, as below:

data[c("metro_area","med_hh_income", "poverty_rate", "population_lowaccess", "lowincome_lowaccess", "no_vehicle_lowaccess","s_grocery","s_supermarket","s_convenience","s_specialty","s_farmers_market", "r_fastfood", "r_full_service")]


We can tell R where each column we want is.


First, writing out each individual column is time-consuming, and chances are you’re going to make a typo (I did when writing it). With the second option, we first have to figure out where the columns are located and then tell R. But comparing the columns we want against the others, there’s a specific difference: all of these columns have a “_” in their name, and we can use regular expressions (grep) to select them.

data[grep("_", names(data))]

FYI… to get the column locations you can use:

grep("_", names(data))
[1]  4  6  7  8  9 10 11 12 13 14 15 16 17

You will rarely have a regular expression as easy as “_” for selecting multiple columns; a very useful resource for learning and practicing is https://regexr.com
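As a related sketch (assuming the same data object as above), grep() can also return the matching names themselves, and grepl() gives a logical vector that works for selection too:

```r
# value = TRUE returns the matching names rather than their positions
grep("_", names(data), value = TRUE)

# grepl() returns TRUE/FALSE per name, which can also index the columns
data[grepl("_", names(data))]
```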

Data was obtained from https://www.ers.usda.gov/data-products/food-access-research-atlas/download-the-data/

Creating Excel Workbooks with multiple sheets in R

Create Excel Workbooks

Generally, when doing anything in R I work with .csv files; they’re fast and straightforward to use. However, there are times when I need to output a bunch of them, and having to open each one individually can be a pain. In those cases, it’s much better to create a workbook where each of the .csv files you would have created becomes a separate sheet.

Below is a simple script I use frequently that gets the job done. Also included is the initial process of creating dummy data to outline the process.


Libraries used

library(tidyverse) # read_csv(), mutate(), %>%, map_df()
library(openxlsx)  # createWorkbook(), addWorksheet(), writeData(), saveWorkbook(), read.xlsx()

Creating example files to work with

products <- c("Monitor", "Laptop", "Keyboards", "Mice")
Stock <- c(20,10,25,50)
Computer_Supplies <- cbind(products,Stock)
products <- c("Packs of Paper", "Staples")
Stock <- c(100,35)
Office_Supplies <- cbind(products,Stock)
# Write the files to our directory
write.csv(Computer_Supplies, "Data/ComputerSupplies.csv", row.names = FALSE)
write.csv(Office_Supplies, "Data/OfficeSupplies.csv", row.names = FALSE)

Point to the directory your files are located in (.csv here) and read them all in:

# Get the file name read in as a column
read_filename <- function(fname) {
  read_csv(fname, col_names = TRUE) %>%
    mutate(filename = fname)
}

# Read every .csv in the directory and row-bind them into one table
tbl <-
  list.files(path = "Data/",
             pattern = "*.csv",
             full.names = TRUE) %>%
  map_df(~ read_filename(.))

Removing the path from the file names

*Note: the maximum length of an Excel sheet name is 31 characters

tbl$filename <- gsub("Data/", "", tbl$filename)
tbl$filename <- gsub(".csv", "", tbl$filename, fixed = TRUE) # fixed = TRUE so "." isn't treated as a regex wildcard
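A path-safe alternative for the same cleanup, sketched with base R helpers (basename() drops the directory part, tools::file_path_sans_ext() drops the extension):

```r
tbl$filename <- tools::file_path_sans_ext(basename(tbl$filename))
```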

Split the “tbl” object into a list of individual data frames, one per file

 mylist <- tbl %>% split(.$filename)
## [1] "/ComputerSupplies" "/OfficeSupplies"

Creating an Excel workbook and having each CSV file be a separate sheet

wb <- createWorkbook()
lapply(seq_along(mylist), function(i){
  addWorksheet(wb = wb, sheetName = names(mylist[i]))
  # drop the last column (filename) before writing each sheet
  writeData(wb, sheet = i, mylist[[i]][-length(mylist[[i]])])
})
# Save Workbook
saveWorkbook(wb, "test.xlsx", overwrite = TRUE)

Reading in sheets from an Excel file

(The one we just created)

 df_ComputerSupplies <- read.xlsx("test.xlsx", sheet = 1)
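To read every sheet back at once, a small sketch using openxlsx’s getSheetNames():

```r
sheets <- getSheetNames("test.xlsx")   # sheet names in the workbook
all_data <- lapply(sheets, function(s) read.xlsx("test.xlsx", sheet = s))
names(all_data) <- sheets              # a named list, one data frame per sheet
```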

Loading and adding a new sheet to an already existing Excel workbook

wb <- loadWorkbook("test.xlsx")
## [1] "/ComputerSupplies" "/OfficeSupplies"
addWorksheet(wb, "New Sheet Name")
## [1] "/ComputerSupplies" "/OfficeSupplies" "New Sheet Name"


The sort command sorts a file line by line. In the default byte order (C locale), lines starting with a number come first, and the rest follow alphabetically with uppercase letters appearing before lowercase ones.

Use cat to display “testsort”, the file used for the example.

~/Test>cat testsort
A line 1
a line 2
8 line 3
line 4
5 line 5
~/Test>sort testsort
5 line 5
8 line 3
A line 1
a line 2
line 4

The -R option sorts using a random hash of the keys, so each run can produce a different order:

~/Test>sort -R testsort
a line 2
5 line 5
A line 1
8 line 3
line 4
~/Test>sort -R testsort
5 line 5
A line 1
a line 2
line 4
8 line 3
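Two other flags worth knowing, sketched on the same file: -n compares by leading numeric value, and -r reverses the order (LC_ALL=C pins the byte-order collation shown above):

```shell
# recreate the example file
printf '%s\n' 'A line 1' 'a line 2' '8 line 3' 'line 4' '5 line 5' > testsort

# -n: numeric sort; lines without a leading number count as 0 and sort first
LC_ALL=C sort -n testsort

# -r: reverse the default order
LC_ALL=C sort -r testsort
```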

Egrep & Fgrep


The command egrep is the same as running grep -E. egrep is used to search for a pattern using extended regular expressions.

Terry@f:~/FinderDing>cat testsort
A line 1
a line 2
8 line 3
line 4
5 line 5
Terry@f:~/FinderDing>egrep '^[a-zA-Z]' testsort
A line 1
a line 2
line 4

*Show lines that start with a letter of the alphabet

Terry@f:~/FinderDing>cat html
<!DOCTYPE html>
<h1>My First Heading</h1>
<p>My first paragraph.</p>
Terry@f:~/FinderDing>egrep "My|first" html
<h1>My First Heading</h1>
<p>My first paragraph.</p>

*Find lines containing “My” or “first” in the html file


The command fgrep is the same as running grep -F. It searches for fixed character strings in a file, which means regular expressions can’t be used.

Terry@f:~/FinderDing>fgrep "My" html
<h1>My First Heading</h1>
<p>My first paragraph.</p>
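The difference is easiest to see with a metacharacter. A quick sketch (demo.txt is a made-up file; grep -F is the modern spelling of fgrep): as a regex, “.” matches any character, but as a fixed string it only matches a literal dot.

```shell
printf '%s\n' 'no dot here' 'has a dot.' > demo.txt

# regex: "." matches any character, so both lines match
grep '.' demo.txt

# fixed string (fgrep / grep -F): only the line with a literal "." matches
grep -F '.' demo.txt
```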

The War of Automation

What happens when more than 90% of jobs are automated?

No longer will we need doctors or surgeons; instead, hospitals will just need a single person to make the moral choices, more of a supervisor whose time will really be spent monitoring the machines. Kids will have no idea humans once drove trucks or even personal vehicles. They’ll say, “Why did you not care about safety and efficiency? Robots/machines are better in every aspect.”

Side note: movies like Terminator or iRobot will no longer be legal to watch.

Population Control:

Concerns about overpopulation begin to be debated by the top 1%: with no one having to work, or at least not 40+ hours a week, births begin to increase. Wars are fought in the cyber world; militaries no longer need boots on the ground so much as boots in chairs. Life expectancy grows thanks to innovations in healthcare and community outreach.


While automation and robots can keep up with a large, growing population’s need for new houses and supplies, the environment cannot. National organizations begin to recommend new population control measures: two kids max (unless there is a medical reason, twins, triplets, etc.). Smog is no longer a threat to only a few cities but to all. Automation has created very efficient methods of animal slaughter for food and resources, but to humans this is inhumane. Backlash begins against governments to constrain the way automation creates efficiency; what good is efficiency if robots discover humans can be efficiently farmed…



How close are we to this?