Turing In-Complete (part 1)

Before man built machines that could calculate the same mathematical problems we now regard as computation, we humans were the “computers”, not the artificial machines. That is where the label “manually” calculated comes from. Man built the machines, and that has been true for only a relatively short period of time compared to the timeline man has existed in his current evolutionary state.

This technology goes back much farther than our most popular desktop PCs, laptops, tablets, or smartphones. Major developments in the twentieth century progressed at a very rapid pace, not with the help of extraterrestrial beings, but through some very brilliant humans. Maybe you could make a case for “math” from outer space in ancient history, and you’d be technically close when you factor in how the orbits of planets and the positions of stars inspired the desire to figure out what was seen in the skies.

The abacus is the earliest known device for crunching numbers. The Sumerian abacus is thousands of years old and noted throughout ancient history. It isn’t what I would regard as an early computer, but it was, and still is, an impressive design.

The Analytical Engine, an improvement over the Difference Engine (both designed by Charles Babbage in the early 1800s), could be considered the foundation of modern computing. Ada King, Countess of Lovelace, created the first computer program for the Analytical Engine, a machine that was never actually completed. The design existed, but not a fully functional machine. So the idea or design for the device came before the actual machine, as did a program that could have run on it.

I always felt that this part of history was a bit murky, but within the fog, there was a spark. The point is that this was a starting point that others could build upon.

Could the Analytical Engine be categorized as the first Turing Complete machine?

If we consider all modern programming languages Turing-complete, then could it have run a program that would solve any calculation previously performed manually? In theory, possibly; in practical application, I am skeptical.

To weigh the current concern about Artificial Intelligence taking over every aspect of man’s future, in both a positive and a negative light, you should look back through its short history of advancements. Computers have come a long way, further than their early creators fully envisioned, but it is still a very short time compared to man’s intellectual development.

Turing completeness requires a system to make decisions based on data manipulated by rule sets. Remember those “IF”, “THEN”, and “GOTO” statements from BASIC (Beginner’s All-Purpose Symbolic Instruction Code)? Maybe you remember the ’90s version, QBasic. If you don’t, no problem. Just know that there was some amazing progress in computer development in the 1950s and 1960s that used instructions which could be considered Turing-complete, theoretically, if not always in practice. This may not be the best way to explain this, but I think I’m in the ballpark.
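
To make that a little more concrete, here is a minimal sketch in R of the kind of data-driven branching and looping those BASIC statements provided. The rule set is just a toy example I’m using for illustration, not anything from the history above.

# A toy rule set: decisions made on data inside an unbounded loop,
# the essence of the control flow Turing completeness requires.
count_steps <- function(n) {
  steps <- 0
  while (n != 1) {        # loop until a condition on the data holds
    if (n %% 2 == 0) {    # rule 1: halve even numbers
      n <- n / 2
    } else {              # rule 2: odd numbers become 3n + 1
      n <- 3 * n + 1
    }
    steps <- steps + 1
  }
  steps
}
count_steps(27)  # 111 steps for this starting value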

I’m not disregarding Turing’s calculating machine design from the ’30s, but things started to ramp up in the ’50s.

Consider the fact that we still use Fortran and LISP, both from the 1950s. Yes, I should also mention assembly language, which dates back to the late ’40s.

You can look back at Remington Rand’s MATH-MATIC (AT-3) from 1957, used as a compiler and programming language for the UNIVAC I. Charles Katz led the team tasked with developing the MATH-MATIC programming language under the direction of Grace Hopper, who was a notable figure in the movement towards “machine-independent” programming and helped lead the development of high-level programming languages.

This was all done in the 1950s and 1960s, the era of big computers like the Datatron 200 series, weighing in at over 3,000 lbs and working with a word size of 10 decimal digits. All of this incredibly amazing computer development would later lead to the machines we now fear. It’s amazing to think we would later spin up the development of AI, which initially required sophisticated computer code that grew out of these early systems.

The history of computers and programming languages is very interesting and usually not referenced enough when we look at our current state of affairs and how much we depend on them. Man built these machines with the intent to improve his condition, and in most cases they have. What may be getting lost through time is the appreciation of all those who contributed over the last two centuries to the existence and development of all this amazing technology. It continues today, and it still requires some very brilliant minds to continue the advancement for the good of man. This is just the beginning. We are still in the early stages of computing, and we are still the computers.

Creating Excel Workbooks with multiple sheets in R

Create Excel Workbooks

Generally, when doing anything in R I work with .csv files; they’re fast and straightforward to use. However, there are times when I need to output a bunch of them, and having to go and open each one individually can be a pain for anyone. In that case, it’s much better to create a workbook where each of the .csv files you would have created becomes a separate sheet.



Below is a simple script I use frequently that gets the job done. Also included is the creation of some dummy data to outline the process.

EXAMPLE CODE:

Libraries used

library(tidyverse)
library(openxlsx)

Creating example files to work with

products <- c("Monitor", "Laptop", "Keyboards", "Mice")
Stock <- c(20,10,25,50)
Computer_Supplies <- cbind(products,Stock)
products <- c("Packs of Paper", "Staples")
Stock <- c(100,35)
Office_Supplies <- cbind(products,Stock)
# Write the files to our directory (the Data/ folder must already exist)
write.csv(Computer_Supplies, "Data/ComputerSupplies.csv", row.names = FALSE)
write.csv(Office_Supplies, "Data/OfficeSupplies.csv", row.names = FALSE)

Point to the directory your files are located in (.csv files here) and read each one in

# Get the file name read in as a column
read_filename <- function(fname) {
  read_csv(fname, col_names = TRUE) %>%
    mutate(filename = fname)
}
tbl <-
  list.files(path = "Data/",
             pattern ="*.csv",
             full.names = TRUE) %>%
  map_df(~read_filename(.))

Removing path from the file names

*Note: The maximum length of a worksheet name is 31 characters

tbl$filename <- gsub("Data/", "", tbl$filename)
tbl$filename <- gsub(".csv", "", tbl$filename)
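
Since that 31-character limit applies to the sheet names we are about to create from these file names, a small guard like the following (a sketch using base R’s strtrim(); it is not part of the original script) can prevent errors later if your file names run long.

# Optional: truncate names longer than Excel's 31-character sheet-name limit
tbl$filename <- strtrim(tbl$filename, 31)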

Split the “tbl” object into a list of individual data frames

 mylist <- tbl %>% split(.$filename)
names(mylist)
## [1] "/ComputerSupplies" "/OfficeSupplies"

Creating an Excel workbook and having each CSV file be a separate sheet

wb <- createWorkbook()
lapply(seq_along(mylist), function(i){
  addWorksheet(wb = wb, sheetName = names(mylist[i]))
  # Write each data frame, dropping its last column (the filename column added earlier)
  writeData(wb, sheet = i, mylist[[i]][-length(mylist[[i]])])
})
# Save the workbook
saveWorkbook(wb, "test.xlsx", overwrite = TRUE)

Reading in sheets from an Excel file

(The one we just created)

 df_ComputerSupplies <- read.xlsx("test.xlsx", sheet = 1)
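
If you want to pull every sheet back in at once rather than one at a time, a minimal sketch along these lines should work; getSheetNames() comes with openxlsx, and sheet_list is just an illustrative name.

# Read every sheet of the workbook into a named list of data frames
sheet_names <- getSheetNames("test.xlsx")
sheet_list <- lapply(sheet_names, function(s) read.xlsx("test.xlsx", sheet = s))
names(sheet_list) <- sheet_names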

Loading and adding a new sheet to an already existing Excel workbook

wb <- loadWorkbook("test.xlsx")
names(wb)
## [1] "/ComputerSupplies" "/OfficeSupplies"
addWorksheet(wb, "News Sheet Name")
names(wb)
## [1] "/ComputerSupplies" "/OfficeSupplies" "News Sheet Name"
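
Keep in mind that addWorksheet() only changes the workbook object in memory. A minimal sketch for putting something on the new sheet and keeping the change (the placeholder data frame is just for illustration) would be:

# Write something into the new sheet and save the change back to disk
writeData(wb, sheet = "News Sheet Name", data.frame(Note = "placeholder"))
saveWorkbook(wb, "test.xlsx", overwrite = TRUE)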

Sort

The sort command is used to sort files line by line. In the default ASCII ordering, lines starting with a number come first, and the remaining lines follow alphabetically, with uppercase letters appearing before lowercase ones.

Use cat to show the contents of “testsort”, the file used for the example.

~/Test>cat testsort
A line 1
a line 2
8 line 3
line 4
5 line 5
~/Test>sort testsort
5 line 5
8 line 3
A line 1
a line 2
line 4

The -R option sorts by a random hash of the keys, so the order changes from one run to the next.

~/Test>sort -R testsort
a line 2
5 line 5
A line 1
8 line 3
line 4
~/Test>sort -R testsort
5 line 5
A line 1
a line 2
line 4
8 line 3

Egrep & Fgrep

EGREP:

The command egrep is the same as running grep -E. egrep is used to search for a pattern using extended regular expressions.

Terry@f:~/FinderDing>cat testsort
A line 1
a line 2
8 line 3
line 4
5 line 5
Terry@f:~/FinderDing>egrep '^[a-zA-Z]' testsort
A line 1
a line 2
line 4

*Show lines that start with a letter of the alphabet

Terry@f:~/FinderDing>cat html
<!DOCTYPE html>
<html>	
<body>
<h1>My First Heading</h1>
<p>My first paragraph.</p>
</body>
</html>
Terry@f:~/FinderDing>egrep "My|first" html
<h1>My First Heading</h1>
<p>My first paragraph.</p>

*Find lines matching the pattern My or first in the html file

FGREP:

The command fgrep is the same as running grep -F. It searches for fixed character strings in a file, which means regular expressions can’t be used.

Terry@f:~/FinderDing>fgrep "My" html
<h1>My First Heading</h1>
<p>My first paragraph.</p>

Lemon-Lime on Ice

Lemon-Lime Gatorade over crushed ice is (almost; coffee is still #1) my new favorite beverage. I know it seems wrong, but it’s pretty nice on hot, humid days, especially when I’m waiting for the Windows 10 Update Assistant to finish upgrading Windows 10 on my Lenovo ThinkPad. It’s so slow. I know I should be patient, so I finish some other work I’m chipping away at on another Linux laptop. I also downloaded the latest Windows 10 64-bit ISO just in case I have to build from scratch, but I wanted my current configuration updated by the Update Assistant so I could experience the method many users would choose.
Thus, the cold beverage on such a hot day. (Which I just spilled)
This entire exercise started this morning as a plan to play around with the new Windows Command line. The iso I had on hand was version 1809. That’s how I installed Windows 10 back on my Linux test laptop.
Once I had Windows up and running and tried to download the new Command Line “Test preview” from the Windows Store, I discovered I needed the latest Windows May 2019 build. That seemed appropriate; thus my long, drawn-out process of trying to build a bootable USB.
This does take a relatively long time for each step.
I wanted to use the Windows Media Creation Tool to make a bootable USB like my previous 1809 build.
Unfortunately, I was greeted with error code 0x80042405-0xA001B.
I reformatted and partitioned a blank USB, but after a few unsuccessful attempts I gave up and used Rufus to build a bootable USB with the ISO image I downloaded from Microsoft’s Download Windows 10 page.
That was fairly easy and worked very well. I built the USB, but I ended up using the Update Assistant, saving the USB for future testing.

Now when I hit the Windows key & R to bring up the “Run” text field and enter “winver”, I see that I am running version 1903. This version was made widely available earlier this summer, but I didn’t feel the need to upgrade from 1809 at that time. Now it’s probably a good time to get familiar with all the changes. I’m seeing an uptick in users upgrading from Windows 7 (a little late for some, but right on time for others). There will always be those who wait until the 11th hour, so I expect the end of the year will be exciting. I’m sure retail sales will attempt to capitalize on the last-minute stragglers.

I’m going to focus on using the new Windows build as my daily driver for a while. So far, I like the new updated look.

The “updated” command line terminal has decent color support, but it still looks like the old command line terminal.

It was not as impressive as I had hoped, but for the most part everything seems to be working fine. I’m not having any issues doing basic tasks, and I’ll be using Windows Subsystem for Linux and PowerShell.

 

The consolidated command line did allow me to upgrade my Linux distro fairly easily. Managing the distro from the command line is a nice option, and I’m sure I’ll use it more in the future.

 

Everything feels very “new” and smooth. I don’t see a huge learning curve that would intimidate users moving up from Windows 7, so I would expect more positive than negative experiences from users just being introduced to a “necessary update.” This is, of course, if 1903 is the version most users upgrade to.

I can also see how dual-booting Linux or using a virtual machine would no longer be necessary for me. The Linux support appears to do everything I would want from a Linux terminal, and a GUI isn’t a deal breaker. I would like to see where Microsoft takes the Linux support in the future. I can work with this version of Windows 10. My laptop actually feels faster and more responsive with just the update.

Even Windows Security looks like it got a tuneup. Maybe I don’t need a supplemental antivirus. Before I get too carried away, remember this is all just a test. In the end I’m sure I’ll revert back to MX Linux or just my Chromebook as my long-term daily driver, but you never know. This is a very impressive first take.

I always seem to get dragged back into the Windows world for one reason or another. This version is not bad; I can see how a lot of users will fall in love with it.

On a side note, I knocked over my glass of ice-cold Gatorade and almost ruined my laptop. I’m not even sure how I did it. If it had, I would have stopped writing right about now. Luckily all is well, and my laptop is still operating nicely with Windows 10.

The next issue to deal with will be application compatibility. If you’re a Windows user who relied on Virtual XP for some older software, then maybe it’s time to upgrade your software. If you need to back up all your files before you upgrade from Windows 7, then an external drive or Microsoft’s OneDrive would be a possible solution. Upgrading from Windows 10 1809 to 1903 was not an issue for the few files I had on this laptop. If you have something critical, then backing up is usually worth the effort.

I think I’ll enjoy using this laptop as my daily driver running Windows 10 1903. I’ll have to post an update in a few weeks to let you all know how things go. This is only a test.

 

 

 
