Waiting For Darkness

The nights will soon be growing longer as the dog days of summer draw to a close and we slowly ease closer to fall. Cooler nights, rain, and wind will become the norm. We retreat to the inner sanctum with the radio (podcasts) playing in the background. A hot cup of coffee, a dim light above the workbench, trusty low-powered laptops displaying simple terminals that invite us to explore new tips and tricks. No GUIs to distract us into mindless point-and-click wandering.

Cold, dark, dreary weather is perfect for perusing deep technical papers, thick computer books (yes, actual books made of paper and ink), or the often-neglected "help" files that ship with our favorite coding language, IDE, or debugger. Like alchemists searching for the great enlightenment where all pseudo-code and real code become one…

Great conclusions to be sought from toiling away the hours, exploring the possibilities of perfection from the command line cursor…

A perfect setting for exploring the new 1.0 release of Julia. You can go back to the announcement "Why We Created Julia" on the website and read a quick explanation of why the language was created. I hadn't considered this language until I read the 1.0 release announcement posted on August 8, 2018. It wasn't the possibility of the "speed of C" or the MATLAB-like notation; it was the possibility of using Julia as a "general programming language". Sure, we've got Python for that, but this is new to me, so it's kind of cool (IMHO) to try something that is newer, yet somewhat familiar.

I won't try to do a full description or review of the Julia language, because I haven't used it enough yet. You can find a lot of useful info on their website, and download a version for your OS and start trying it out yourself. The bottom line is that you can use these dark, dreary days and nights to enhance your old skills or learn some new ones. If reading boring technical manuals is your thing, then you'll probably feel right at home reading every bit of documentation you can for whatever programming language you wish to work on. You can view the videos from "JuliaCon" on YouTube; there are some interesting ones from 2017 and 2018. There's a large community, a few "Julia Bloggers", and some very detailed and specific tutorials, all available online.

So pour yourself a hot cup of coffee, get out of the sun, and enjoy the distant sound of thunder as the rain begins to fall.

…and the wind begins to howl…


SolaceCoin: Helping Others One Block At A Time


SolaceCoin was created to help charities while also serving as a currency for daily usage.

How it works

6% of each block mined will be given to the "Dev wallet".

30% of that will go to the charity wallet; the rest will be used to get listed on exchanges and to support continuing development.

These coins will be slowly released to exchanges, and the proceeds will be donated to a charity.
Charities are chosen by the community through voting.
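To put rough numbers on that split, here's a quick sketch. The ~6,000-coin block reward comes from the specs below; the exact reward varies per block, so treat these figures as illustrative only.

```shell
# Worked example of the block split, assuming a ~6000-coin block reward.
awk 'BEGIN {
    reward  = 6000            # approximate block reward (varies in practice)
    dev     = reward * 0.06   # 6% of each block goes to the dev wallet
    charity = dev * 0.30      # 30% of the dev share goes to the charity wallet
    printf "dev=%.0f charity=%.0f per block\n", dev, charity
}'
```

So roughly 1.8% of each mined block ultimately reaches the charity wallet.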

Driven by CryptoNote: Anonymous transactions, no public ledger; used by other coins such as Monero

“SolaceCoin is a Crypto Currency waiting to make the world a better place one block at a time”


Algorithm: Cryptonight Heavy

Block Window Time: 60 Seconds

Total Supply: 21 Billion

Block reward: ~6,000 (the block reward varies)

Uses a camel distribution: emission accelerates over the years, with the block reward changing every month.

Forked from the Ombre team


CPU and GPU Mining supported




Meltdown – How much will your CPU be affected?

Meltdown and Spectre were hot topics this week for anyone who uses a computer. That covers a huge number of affected systems, mainly because Intel chips were highlighted as having a design flaw (present for many years, apparently) that creates a "potential" security weakness that could be capitalized on for malicious activity. ARM and AMD chips have also been identified as having the same or similar problems. The issue is how the CPU passes control to kernel operations, then takes control back. Speculative execution is used to increase performance, but due to these CPU vulnerabilities, sensitive data may be exposed. Meltdown breaks the isolation between applications and the operating system, which requires hardening the system against any exploits that could leverage these vulnerabilities. In any case, it would be a good idea to update your operating system's security patches and antivirus.


Now, as everyone scrambles to patch their operating systems (Windows, Mac, and Linux), the concern is a possible performance hit from the fix. Of course, how can you be certain whether your system is affected by the vulnerability, or by a performance hit from the patching, if you never check how your CPU/CPUs are currently performing?




In Windows 10 it's pretty easy to open Task Manager and select the Performance tab to see your CPU's base speed, utilization, and current speed. It's quick and handy. You can play with the graph a little to switch between a summary view and logical processors. What you might really want to check, though, is whether your system currently has the flaw.

Performance Monitor


If you're familiar with PowerShell you can do a little testing.

When you run PowerShell as Administrator, you would run a few commands:

Install-Module SpeculationControl


You may then need to run:

Set-ExecutionPolicy Bypass


If that runs through OK, then the actual check is done by running the cmdlet that module provides:

Get-SpeculationControlSettings

When you’re all done looking through the results don’t forget to set the execution policy back to “Restricted” by running:

Set-ExecutionPolicy Restricted



Results from my old ThinkPad test machine (after checking my updates for KB4056892, OS Build 16299.192):


Linux CPU check:


In Linux it’s also very easy to check on your CPU.

Here are a few quick command-line checks you can do with sudo:


# dmidecode -t 4


# watch "grep 'cpu MHz' /proc/cpuinfo"

CPU speed

A lot more detail can be had with:


# less /proc/cpuinfo

To check whether your Linux system is patched:

cat /proc/cpuinfo

If your kernel has the KPTI patch, the output will contain a line that states:

bugs      : cpu_insecure
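The checks above can be rolled into one small script. This is just a sketch assuming a Linux /proc filesystem; note that the "bugs" field only appears on newer patched kernels, so its absence doesn't prove anything by itself.

```shell
#!/bin/sh
# Summarize CPU info and look for the Meltdown-era "cpu_insecure" flag.
cpus=$(grep -c '^processor' /proc/cpuinfo)
echo "logical CPUs: $cpus"

# Only kernels carrying the KPTI patch report this flag at all.
if grep -q 'cpu_insecure' /proc/cpuinfo; then
    echo "kernel reports cpu_insecure (KPTI patch present)"
else
    echo "no cpu_insecure flag reported by this kernel"
fi

[ "$cpus" -ge 1 ] && echo "check complete"
```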


As usual, I would suggest keeping your distro’s security up to date with apt update and apt upgrade.

Will newer chips be the best solution? Probably. I'm very interested in the AMD chips now, mostly because they might be less impacted. That's still a little fuzzy, but if I do pick up a new PC anytime soon, I'll take a serious look at the AMD chips. We shall see how all this pans out in the next few weeks, or months. For now, I think I'll wait. So keep an eye on the performance of your system and see if you notice any significant impact as new patches get rolled out. Usually, you want to get more performance out of your systems, not less.


I thought that I might give the latest Solus MATE a try while I was looking at writing a few words about terminal-based file managers. I used it a while back with the Budgie desktop and liked it a lot. It was a great OS for my laptop, but I was spending most of my time using Windows 10, Manjaro, and Ubuntu for some work I was doing, and I think I just got distracted and forgot about it. No complaints, just sometimes circumstances send you in odd directions that you hadn't planned on, and some things get left behind.

Anyhow, I was listening to the Late Night Linux podcast the other day and remembered that I kind of liked that Solus "OS". I probably shouldn't refer to Solus as a spin or distro, since I believe it's built as an independent development (not directly based on Debian, Fedora, or Arch). I think it's very much an "original" Linux desktop. I'm sure there would be some disagreement on that, and that's fine. I hadn't intended to actually write a post about Solus, but somehow it's starting to sound like I might be.

So the reason I installed Solus MATE was to install a few terminal-based file managers that I have used over the years on other Linux systems. As I recall, Solus seems to let me get to work right away without any distractions. That still holds true in this latest installment. Since I usually use apt on Ubuntu and pacman on Manjaro to install packages from their respective repositories, I almost forgot that on Solus I had to use "eopkg install ranger" to install the very handy Ranger terminal-based file manager. The same goes for "eopkg install mc" to install Midnight Commander.

If you prefer not to install packages from the command line, the Solus Software Center works well too.

I like the workflow Solus supports. I can get right to work in a terminal and easily install the packages I want to use, such as a simple file manager I can run from the terminal. Ranger works nicely; I prefer it to Midnight Commander. Midnight Commander also works well, and I can see how some would prefer it to Ranger. It kind of reminds me of Dosshell from the old DOS days. I'll use Glances to complement the file manager.

I was going to write a little about terminal file managers, but I think I'm getting distracted again… by how nicely Solus works for me. I installed it, I used it, I like it.


Solus Software Center:


Update your system from the Software Center



MC (Midnight Commander)


Quick color scheme change (darkfar)


Ranger 1.8.1

Simple yet elegant design (very Vim-like)


Glances (command line utility)

A nice complement to any terminal-based file manager.


Helpful Hints



Nigel’s performance Monitor for Linux (nmon)

Linux has many available tools that just work. That's why I prefer it over other operating systems. I can get things done, usually faster and much more simply, from a command line in a terminal, or in multiple terminals.

Windows has useful and powerful shells, and I use them when I need to work on that OS, but I prefer to work in bash. I make use of simple but elegant installation applications, most notably apt and pacman, depending on which of my two preferred distros I have installed. I can install simple but powerful tools to pull out information about my system's operation and performance.


One of my go-to tools is nmon (Nigel's performance Monitor for Linux), originally used by IBM and released as open source in 2009.

I wouldn't categorize nmon as old school, but it sure has that vibe going on. Anything I can run in a terminal from a command line to see what is not always obvious is useful. The ability to export the data it gathers as .csv, for use in a graph or in analysis applications, is sometimes overlooked by casual users, but it's available.

I don't use the export function, but I expect it might be beneficial in some situations. I mostly use nmon in interactive mode. Just a quick glance at some of the output screens presents you with a lot of useful performance information.


From NMON (nmon -h) Hints:

For Data-Collect-Mode

-f            Must be the first option on the line (switches off interactive mode)

Saves data to a CSV Spreadsheet format .nmon file in the local directory

Note: -f sets a default of -s 300 -c 288 (a 300-second interval × 288 snapshots = 24 hours of data), which you can then modify

Further Data Collection Options:

-s <seconds>  time between data snapshots

-c <count>    of snapshots before exiting

-t            Includes Top Processes stats (-T also collects command arguments)

-x            Capacity Planning=15 min snapshots for 1 day. (nmon -ft -s 900 -c 96)
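Once you have a capture file from data-collect mode (e.g. nmon -f -s 300 -c 288), the CSV-style lines are easy to slice with standard tools. Here's a sketch using a hypothetical one-line CPU_ALL sample; real capture files have a header section and more columns, so check your own file's layout before reusing the field numbers.

```shell
# Pull user/system/idle percentages out of an nmon capture with awk.
# The sample line below is hypothetical; generate a real file with `nmon -f`.
cat > sample.nmon <<'EOF'
CPU_ALL,T0001,5.2,1.1,0.3,93.4
EOF

# Fields (in this sample): 3=user%, 4=sys%, 5=wait%, 6=idle%
awk -F, '/^CPU_ALL,T/ { printf "user=%s sys=%s idle=%s\n", $3, $4, $6 }' sample.nmon
```

The same pattern works for the MEM, NET, and DISK record types once you know their column order.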



Although you can gather and present a lot of data quickly, don't just glance at it – really look at what it's telling you, and if you don't understand what you're looking at, look it up and find out what you may be missing.




— Toggles on/off to control what is displayed —

b   = Black and white mode (or use -b command line option)

c   = CPU Utilization stats with bar graphs (CPU core threads)

C   = CPU Utilization as above but concise wide view (up to 192 CPUs)

d   = Disk I/O Busy% & Graphs of Read and Write KB/s

D   = Disk I/O Numbers including Transfers, Average Block Size & Peaks (type: 0 to reset)

g   = User Defined Disk Groups            (assumes -g <file> when starting nmon)

G   = Change Disk stats (d) to just disks (assumes -g auto   when starting nmon)

h   = This help information

j   = File Systems including Journal File Systems

J   = Reduces "j" output by removing unreal File Systems

k   = Kernel stats Run Queue, context-switch, fork, Load Average & Uptime

l   = Long term Total CPU (over 75 snapshots) via bar graphs

L   = Large and Huge memory page stats

m   = Memory & Swap stats

M   = MHz for machines with variable frequency 1st=Threads 2nd=Cores 3=Graphs

n   = Network stats & errors (if no errors it disappears)

N   = NFS – Network File System

     1st NFS V2 & V3, 2nd=NFS4-Client & 3rd=NFS4-Server

o   = Disk I/O Map (one character per disk pixels showing how busy it is)

     Particularly good if you have 100’s of disks

q   = Quit

r   = Resources: Machine type, name, cache details & OS version & Distro + LPAR

t   = Top Processes: select the data & order 1=Basic, 3=Perf 4=Size 5=I/O=root only

u   = Top Process with command line details

U   = CPU utilization stats – all 10 Linux stats:

     user, user_nice, system, idle, iowait, irq, softirq, steal, guest, guest_nice

v   = Experimental Verbose mode – tries to make recommendations

V   = Virtual Memory stats



I find the info useful and sometimes will set up multiple terminals using Terminator.

You may find nmon a useful tool for seeing what's going on behind the scenes on your Linux system.
