Meltdown – How much will your CPU be affected?

Meltdown and Spectre were a hot topic this week for anyone who uses a computer. A huge number of systems are affected, mainly because Intel chips were highlighted as having a design flaw (present for many years, apparently) that creates a potential security weakness that could be exploited for malicious activity. ARM and AMD chips have also been identified as having the same or similar problems. The issue lies in how the CPU passes control to kernel operations and then takes control back. Speculative execution is used to increase performance, but because of these CPU vulnerabilities, sensitive data may be exposed. Meltdown breaks the isolation between applications and the operating system, so systems need to be hardened against any exploits that could leverage these vulnerabilities. In any case, it would be a good idea to update your operating system’s security patches and antivirus.


Now, as everyone scrambles to patch their operating systems (Windows, Mac, and Linux), the concern is a possible performance hit from the fix. But how can you be certain your system is affected, by either the vulnerability or the performance cost of patching, if you never check how your CPU (or CPUs) currently performs?




In Windows 10 it’s pretty easy to open Task Manager and select the Performance tab to see your CPU’s base speed, utilization, and current speed. It’s quick and handy. You can adjust the graph a little to switch between a summary view and logical processors. What you might really want to check, though, is whether your system currently has the flaw.

Performance Monitor


If you’re familiar with PowerShell, you can do a little testing.

Run PowerShell as Administrator and execute a few commands:

Install-Module SpeculationControl


You may then need to run:

Set-ExecutionPolicy Bypass


If that runs through OK, then the actual check is done by running the cmdlet that module provides:

Get-SpeculationControlSettings

When you’re all done looking through the results don’t forget to set the execution policy back to “Restricted” by running:

Set-ExecutionPolicy Restricted



Run on my old test machine, a ThinkPad (after checking my updates for KB4056892, OS Build 16299.192):


LINUX CPU check:


In Linux it’s also very easy to check on your CPU.

Here are a few quick command-line checks you can run with sudo:


# dmidecode -t 4


# watch "grep 'cpu MHz' /proc/cpuinfo"

CPU speed

A lot more detail can be had with:


# less /proc/cpuinfo

To check whether your Linux kernel has the KPTI patch, run:

cat /proc/cpuinfo

If your kernel has the KPTI patch, the output will contain a line that states:

bugs      : cpu_insecure
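If you’d rather script that check, here’s a minimal sketch. It reads /proc/cpuinfo-style text on stdin so it can be tried against a sample first; note that the flag was named cpu_insecure in the first patched kernels and renamed cpu_meltdown shortly after, so it looks for either.

```shell
# Minimal sketch: report whether the kernel flags this CPU for Meltdown.
# Reads /proc/cpuinfo-style text on stdin.
# "cpu_insecure" was the first flag name; newer kernels use "cpu_meltdown".
meltdown_flag() {
  if grep -Eq 'cpu_insecure|cpu_meltdown'; then
    echo "flagged"
  else
    echo "not-flagged"
  fi
}

# On a live system:
#   meltdown_flag < /proc/cpuinfo
```

If it prints "flagged", the kernel knows the CPU is affected; on an unpatched kernel the bugs line simply won’t be there.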


As usual, I would suggest keeping your distro’s security up to date with apt update and apt upgrade.

Will newer chips be the best solution? Probably. I’m very interested in the AMD chips now, mostly because they might be less impacted. That’s still a little fuzzy, but if I do pick up a new PC anytime soon, I’ll take a serious look at the AMD chips. We shall see how all this pans out in the next few weeks – or months. For now, I think I’ll wait. So keep an eye on the performance of your system and see if you notice any significant impact as new patches get rolled out. Usually, you want to get more performance out of your systems, not less.


End of Year

It’s been an interesting year.

Security alerts, cyber threats, fake news, and more critical patch updates than you can shake a stick at. Robots are competing in the job market, A.I. is looming, and cold air from the far north is sweeping across the land. Whether it was a good year, a bad year, or nothing too impressive, it’s just about over now. The new year is coming fast. Are you smarter than you were a year ago at this time? Hopefully the answer is yes, if you put any effort into improving your skills, knowledge, and confidence. Now you can set down some goals for next year, and reflect on your hopes for it.

New Year Goals

One of my goals is to continue to improve and refine my command-line skills for both Linux and Windows. It’s the same goal every year, but I know I can keep improving as operating systems and applications are further developed, which is inevitable if such technologies remain relevant. Amazingly enough, plenty of outdated operating systems will remain in operation. Overly complex and poorly understood system designs will continue to frustrate many. Buzzwords will be bandied about as opposing views on remote vs. local support, maintenance, and storage are pondered. Computer hardware and software skills will still be required in many industries, but knowing more about the core business those components support will also be crucial. Sadly, this is often overlooked. If you are reading this, then you most likely already know it. Will this all change next year? Probably not. Will more people drop Windows and move over to Linux and OSX? Some might, but most won’t, so you’ll still need to sharpen your “Microsoft” skills unless you’re adamantly opposed to all things “Windows”. Good luck with that. Most of the generic questions and issues techs get presented with are based in the Windows realm.

Sometimes Windows and Linux techs working together enhance each other’s skill sets, and often the customer or end user is the beneficiary. I guess that’s a plug for working with others; it isn’t always as painful as you may expect.

If you can navigate your way through all the “Fake News” (technology-related, of course) and silo building you may encounter, plus sharpen and learn new skills, you should do well next year.

You don’t need anyone to tell you how good you are at anything. You know how good you are. That’s the bottom line. Resist the urge to speak when it’s often more productive to listen. Learn from any and all mistakes you or others may make, and try not to stress out too often. Make next year a great year, whoever you are and wherever you are. Practice, practice, practice; it might just pay off in ways you don’t foresee. You can always improve your skills. The trick is to avoid wasting time and energy on technology you’ll never use or won’t encounter often. Distractions are everywhere. Know your strengths and weaknesses, but also be realistic about your limitations. Try to get the most out of what you already have. Your experience is often an asset, but getting more out of less is a skill. Re-read some of those tech manuals that are still relevant. My best advice, though, is to take a break once in a while. Enjoy all the non-tech parts of life. Take time to unplug and recharge.


I thought I might give the latest Solus MATE a try while I was looking at writing a few words about terminal-based file managers. I used Solus a while back with the Budgie desktop and liked it a lot. It was a great OS for my laptop, but I was spending most of my time in Windows 10, Manjaro, and Ubuntu for some work I was doing, and I think I just got distracted and forgot about it. No complaints; sometimes circumstances send you in odd directions you hadn’t planned on, and some things get left behind.

Anyhow, I was listening to the Late Night Linux podcast https://latenightlinux.com/ the other day and remembered that I kind of liked that Solus “OS”. I probably shouldn’t refer to Solus as a spin or distro, since I believe it’s built as an independent development (not directly based on Debian, Fedora, or Arch). I think it’s very much an “original” Linux desktop. I’m sure there would be some disagreement on that, and that’s fine. I hadn’t intended to actually write a post about Solus, but somehow it’s starting to sound like I might be.

So the reason I installed Solus MATE was to install a few terminal-based file managers that I have used over the years on other Linux systems. As I recall, Solus lets me get to work right away without any distractions, and that still holds true in this latest installment. Since I usually use apt on Ubuntu and pacman on Manjaro to install packages from their respective repositories, I almost forgot that on Solus I had to use “eopkg install ranger” to install the very handy Ranger terminal-based file manager. The same goes for “eopkg install mc” to install Midnight Commander.
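Since each distro spells its install command differently, a tiny helper (hypothetical, for illustration only) makes the mapping explicit:

```shell
# Illustration only: map a package manager name to its install command.
# The helper name and structure are my own; the commands match the distros
# mentioned above (Ubuntu, Manjaro, Solus).
install_cmd() {
  case "$1" in
    apt)    echo "apt install $2" ;;    # Ubuntu
    pacman) echo "pacman -S $2" ;;      # Manjaro
    eopkg)  echo "eopkg install $2" ;;  # Solus
    *)      echo "unknown package manager: $1" ;;
  esac
}
```

For example, install_cmd eopkg ranger prints the Solus command quoted above.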

If you prefer not to install packages from the command line, the Solus Software Center works well too.

I like the workflow Solus supports. I can get right to work in a terminal and easily install the packages I want to use – a simple file manager I can run from the terminal. Ranger works nicely; I prefer it to Midnight Commander. Midnight Commander also works well, and I can see how some would prefer it to Ranger – it kind of reminds me of DOS Shell from the old DOS days. I’ll use Glances to complement the file manager.

I was going to write a little about terminal file managers, but I think I’m getting distracted again… by how nicely Solus works for me. I installed it, I used it, I like it.


Solus Software Center:


Update your system from the Software Center

software center update


MC (Midnight Commander)


Quick color scheme change (darkfar)


Ranger 1.8.1

Simple yet elegant design (very Vim-like)


Glances. (command line utility)

A nice complement to any terminal-based file manager.


Helpful Hints



Nigel’s performance Monitor for Linux (nmon)

Linux has many available tools that simply work. That’s why I prefer it over other operating systems. I can get things done, usually faster and more simply, from a command line in a terminal, or in multiple terminals.

Windows has useful and powerful shells, and I use them when I need to work on that OS, but I prefer to work in Bash. I make use of simple but elegant package managers, most notably apt and pacman, depending on which of my two preferred distros I have installed, and with them I can install simple but powerful tools to pull out information about my system’s operation and performance.


One of my go-to tools is nmon (Nigel’s performance Monitor for Linux), originally used by IBM and released as open source in 2009.

I wouldn’t categorize nmon as old school, but it sure has that vibe going on. Anything I can run in a terminal from a command line to see what is not always obvious is useful. The ability to export the data it gathers as .csv for use in a graph or in analysis applications is sometimes overlooked by casual users, but it’s available.

I don’t use the export function, but I expect it might be beneficial in some situations. I mostly use nmon in interactive mode; just a quick glance at some of the output screens presents you with a lot of useful performance information.


From NMON (nmon -h) Hints:

For Data-Collect-Mode

-f            Must be the first option on the line (switches off interactive mode)

Saves data to a CSV Spreadsheet format .nmon file in the local directory

Note: -f sets a default -s300 -c288    which you can then modify

Further Data Collection Options:

-s <seconds>  time between data snapshots

-c <count>    of snapshots before exiting

-t            Includes Top Processes stats (-T also collects command arguments)

-x            Capacity Planning=15 min snapshots for 1 day. (nmon -ft -s 900 -c 96)
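Since the data-collect mode above writes plain CSV, it’s easy to pull numbers back out with standard tools. A minimal sketch, assuming the typical CPU_ALL section layout (snapshot rows beginning “CPU_ALL,Tnnnn” with User% in the third field; verify against your own capture’s header before relying on it):

```shell
# Hedged sketch: average the User% column from CPU_ALL snapshot rows of a
# .nmon CSV capture. Reads nmon-format text on stdin. The field position
# (User% as field 3) is assumed from typical nmon output.
avg_user() {
  awk -F, '/^CPU_ALL,T/ {sum += $3; n++} END {if (n) printf "avg user%%: %.1f\n", sum / n}'
}

# On a real capture:
#   avg_user < mymachine.nmon
```

That one-liner is also a reasonable template for the other sections (MEM, DISK, NET); each is keyed by its section name in the first CSV field.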



Although you can gather and present a lot of data quickly, don’t just glance at the data; really look at what it’s telling you, and if you don’t understand what you’re looking at, look it up and find out what you may be missing.




— Toggles on/off to control what is displayed —

b   = Black and white mode (or use -b command line option)

c   = CPU Utilization stats with bar graphs (CPU core threads)

C   = CPU Utilization as above but concise wide view (up to 192 CPUs)

d   = Disk I/O Busy% & Graphs of Read and Write KB/s

D   = Disk I/O Numbers including Transfers, Average Block Size & Peaks (type: 0 to reset)

g   = User Defined Disk Groups            (assumes -g <file> when starting nmon)

G   = Change Disk stats (d) to just disks (assumes -g auto   when starting nmon)

h   = This help information

j   = File Systems including Journal File Systems

J   =  Reduces “j” output by removing unreal File Systems

k   = Kernel stats Run Queue, context-switch, fork, Load Average & Uptime

l   = Long term Total CPU (over 75 snapshots) via bar graphs

L   = Large and Huge memory page stats

m   = Memory & Swap stats

M   = MHz for machines with variable frequency 1st=Threads 2nd=Cores 3=Graphs

n   = Network stats & errors (if no errors it disappears)

N   = NFS – Network File System

     1st NFS V2 & V3, 2nd=NFS4-Client & 3rd=NFS4-Server

o   = Disk I/O Map (one character per disk pixels showing how busy it is)

     Particularly good if you have 100’s of disks

q   = Quit

r   = Resources: Machine type, name, cache details & OS version & Distro + LPAR

t   = Top Processes: select the data & order 1=Basic, 3=Perf 4=Size 5=I/O=root only

u   = Top Process with command line details

U   = CPU utilization stats – all 10 Linux stats:

     user, user_nice, system, idle, iowait, irq, softirq, steal, guest, guest_nice

v   = Experimental Verbose mode – tries to make recommendations

V   = Virtual Memory stats



I find the info useful and sometimes will set up multiple terminals using Terminator.

You may find nmon useful, and a good tool to see what’s going on behind the scenes on your Linux system.


Linux-type solutions on Windows

The “Microsoft” Ubuntu command line “App”

Sometimes you find yourself working on a Windows-based machine, needing to perform a quick task that would normally be very straightforward on a Linux machine, and you don’t want to spend too much time fumbling through the GUI, pointing and clicking until you stumble across what you’re looking for.
My first gut instinct is usually to get to the command line quickly and work from there.
I installed the “Microsoft” Ubuntu command line “App” on a Windows 10 test machine recently, hoping it would become the perfect solution for such situations, but it has been a disappointing experience so far. Maybe I’ll have a better opinion after I spend more time with it, but I doubt I’ll see it installed on many machines I run into – at least for a while. Luckily, Windows command-line tools are pretty robust – especially PowerShell.

As a side note, I find Sysinternals a very useful set of tools for some of the more interesting challenges you might run into while doing diagnostic investigation on problematic systems.
For the most part, Windows has some nice built-in diagnostic tools – if you know where to look and can find them in a timely manner.

I had a question come up recently about port connections. Usually I suggest Wireshark as a go-to tool on any supported operating system, but this situation involved a system that did not have it available – setting aside whether you might have the ability to monitor the packet traffic with a tap or port mirroring via a test laptop. The discussion was about using the computer in question for the diagnostics, and of course it wasn’t a Linux machine. Anyhow, here are a few ideas I floated for such situations – try a few if you haven’t already and see what you think.

Finding port connections in Windows

Use Wireshark to parse out port numbers by adding destination and source port columns for both TCP and UDP. Under the corresponding header in Wireshark’s packet details, right-click the destination or source port field and choose “Apply as Column”, then edit the column name to differentiate UDP from TCP.
Filtering for a specific port number is just as straightforward:
tcp.dstport == 80
tcp.srcport == 80




If you’re a fan of Sysinternals you may find TCPView a nice alternative to monitor your PC’s connections – close to a real-time view of connections made and unmade.





Either from the basic Windows CMD or from within PowerShell, netstat – much like its long-lost Linux cousin – is a simple and quick way to view connection status.






While you’re working in PowerShell, check out network connections with Get-NetTCPConnection.





If you’re set on using the GUI on Windows, the “Resource Monitor” Network – Listening Ports view works quite well.

Resource monitor






Back on Linux, you can run netstat, lsof, ss, or nmap.
Each has many options that let you customize your scans.
Here are a few quick checks:

netstat -a | grep CONNECTED


lsof -i


ss | less

You could always install nmap and scan your own local ports as an alternative port check.
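As a quick sketch, the listening ports can be pulled out of ss output with a little awk. The field positions assume the standard `ss -ltn` column layout (local address:port in the fourth column); the function reads ss-format text on stdin so the parsing can be tried against a sample first.

```shell
# Hedged sketch: extract local listening TCP port numbers from `ss -ltn`
# style output (header on line 1, "Local Address:Port" as field 4).
listening_ports() {
  awk 'NR > 1 {n = split($4, a, ":"); print a[n]}' | sort -un
}

# On a real system:
#   ss -ltn | listening_ports
```

Splitting on the last “:” keeps it working for IPv6 addresses too, and sort -un gives you each port once, in numeric order.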