Monday, September 3, 2018

My Book

If I were going to write a book in 2018 it would be about microservices and containerization and patterns for success with them. I foresee about 30 chapters and north of 800 pages. Daunting, but someone has to write it.

Tuesday, August 14, 2018

New MacBook Pro

So they gave me a new MacBook Pro.

It's been a disaster. First off, it's got a 13 inch screen. That means my screen protector won't fit and needs to be cut down to size. I'll get there eventually, IF I decide to keep the thing.

It's not substantially better than my 3 year old Mac for the stuff I do, and in some ways it's just more difficult.

I make videos, and I record music. I write software and I create demos. I have nearly 450 GB of stuff on my old machine and I doubt they'll let me keep both of them.

We're using CrashPlan as a backup system. And I think it's swell if you only need to restore a file or folder here or there. But we're talking about migrating everything to a new machine. Well, that ain't gonna work. CrashPlan tells me something like 53 days to get stuff moved.
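
Back of the envelope, assuming the full 450 GB has to come down: 450 GB over 53 days works out to roughly 100 KB/s, or about 0.8 Mbps. The bottleneck is clearly the restore pipe, not anything on my machines.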

I could go on and on. Eight Do-Overs, wiping the disk and starting over. Clashing usernames. Backup restores overwriting certificates.  And did I mention 53 days to restore from CrashPlan?

Happily, after I learned that the new Mac didn't have enough cores to run VMWare the way we have it set up, the company replaced the 13 inch with a 15 inch Pro, which has 6 physical cores (12 virtual). This should be fine.

Here's my new plan: migrate nothing. Just install applications, and when I need a document from the old machine I'll use AirDrop to move it over to the new one. If I don't need a file within 30 days, I won't migrate it at all.

That said, there are a ton of applications to install.
x KeepassX - copy the databases and keys... should I leave these in the cloud? Just the databases, not the keys.  And they can all be copied to the new machine using AirDrop. First things first.
x Firefox - Setup sync
x Chrome - Login, Sync, GMail - Sync didn't work during one of the tries and actually deleted bookmarks on the old machine. Luckily I had a backup.
x MS-Office

x Reaper - where did I put that key file again?
x Sublime Text - same... ok, I remember where the key file is.
x node.js -
x java 8 -
x eclipse - and the node editing tools, but which node editing tools?
 - I might skip eclipse. I have a hunch that Sublime can do almost everything I want.
 - nah... I went ahead and downloaded eclipse. I had to work too hard to make a jar file doing it by hand, which only goes to demonstrate my sadly verdigrised skills.
x docker and kubernetes and all that related stuff for ICP
x gotta have brew before we can have kubectl (see the sketch after this list)
all the BlueMix CLI stuff - that can wait, we never use it.
x git
x Ah Camtasia, SnagIt and VMWare
 - and then the two humongous VMWare Images that I normally use, those will be AirDropped.

node-red? do we still care?
How about ODM?

x WebEx Plugin - just start webex and let it do its thing.
x Aspera Client
x MuseScore
x all the VSTs and their licensing vaults
 and iLok for those SoundToys VSTs
x HandBrake to compress the video files that Camtasia creates
x Gimp
x Box Tools
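
For the command line pieces, a minimal sketch of how they go in, assuming Homebrew's current (2018) install incantation; check their site for the up-to-date one:

 /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
 # kubectl lives in the kubernetes-cli formula
 brew install node git kubernetes-cli
 # Docker for Mac comes in as a cask
 brew cask install docker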

Print drivers? Those just seemed to work.

---

I think the thing that really snagged me was how many documents I had stored locally, and how many of those really need to be in the cloud somewhere. But then what happens when the cloud goes down? If my Google password is in a vault on the Google cloud and I can't get to it... well, there's a snake eating its own tail.

---

Also, the AirDrop strategy has failed me. I copied my pictures from the old machine to the new machine. The Pictures folder apparently has special behavior, and I broke it. So I have to figure that out.

The good folks at Mac Rumors pointed me to the Migration Assistant. We'll see if that helps.

---
The Migration Assistant does copy your files, if you don't have too many of them, but it apparently copies them to a new userID, which doesn't do me any good. Since it put a ton of files somewhere I cannot access, I've decided to wipe the disk again and start over.

This time, however, I'm not going to encrypt the disk, since corporate policy installs yet another disk encryptor on top of whatever I set up. They don't tell you this in the how-to-install-your-Mac forum. Just learning by doing unimportant stuff.






Tuesday, June 5, 2018

Tangzhong


The tangzhong method of baking: part of the flour and liquid gets pre-cooked into a roux before it goes into the dough, which is supposed to give a softer loaf that stays fresh longer.

I'm going to have to look into this more.

Tuesday, May 29, 2018

Docker Cruft

I've been working on a docker project this weekend. I toy around with docker, and sometimes I have to expound its benefits to customers.

So I have this pet project. I call it subtitler and subtitler-ui. One is a node-red implementation of a file streamer that talks to Watson, gets back speech-to-text results, and formats them into either SRT or VTT format. The second is a UI to drive the backend node-red stuff.

Over the weekend I wanted to dockerize them.

That wasn't too bad. But there were dozens of builds and false starts. I was actually trying to do something that is considered a bad practice in Docker, which is to have both processes running in a single container. I didn't get very far with that, so I left it as two containers.
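
For the record, running them as two cooperating containers is simple enough. A minimal sketch, using my image names but a made-up network name and port:

 # a user-defined network lets the containers find each other by name
 docker network create subtitler-net
 docker run -d --name subtitler --network subtitler-net subtitler
 docker run -d --name subtitler-ui --network subtitler-net -p 8080:8080 subtitler-ui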

What amazed me this morning when I went to push to docker hub is how much cruft there was in my docker system.  Dozens and dozens of image files.

There must be a simple way to manage these and clear them out. Do I even need them? I don't know. I should read up more, I guess.
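
Update: there is. Assuming a reasonably recent Docker CLI, something like this does the job:

 # list everything, including dangling intermediate images
 docker images -a
 # remove just the dangling ones
 docker image prune
 # or, more aggressively, remove every image not used by a container
 docker system prune -a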

Tuesday, December 12, 2017

Docker for Everyone

Why Docker?

So back when I was a kid, back in the '90s, we used to install our software applications directly onto the iron. That is, you'd get some hardware. You'd install the OS. You'd install the app server and database, and then you'd install your application into the app server and configure the app server to talk to the database. Installing the application was fairly straightforward. For JEE applications you just copied your EAR or WAR or JAR file into a folder and that was kind of that.

The downside of this is that if you wanted to run on more than one server, you had to do all of this stuff a second, a third, and an nth time, depending on how many servers you wanted.

The other downside of this is that you could have a big expensive piece of iron that only used 5% of its capacity for this one program.

To recap:
 - Setting up an environment took days if that's all you were working on.
 - Deploying apps could be as simple as copying a file to that environment, and restarting the servers.
 - Cloning the environment for redeployment was arduous and error prone.
 - CI and CD tooling had not been invented yet; it would have been really tough in that world anyway.

--
So, along came virtualization technologies (unless you are an IBMer and had been using virtualization on the mainframe since the 1960s).  And this solved the first problem because now all you had to do was copy these VM files to a new machine and it just worked. So much faster than installing all the app servers and databases and whatnot.

The other thing VMs did was let you run multiple VMs on a single hardware box, so you could get better utilization of the hardware. That meant smaller data centers and lower power requirements. Good stuff.

 - Setting up an environment was easy because you could copy a preexisting template.
 - Deploying apps could be as simple as copying a file to that environment, and restarting the servers.
 - Cloning the environment was easier, although there could be complications with fixed resources. But more importantly, environments could be very large; moving them required lots of storage and lots of network bandwidth.
 - CI and CD tooling was just coming into play, being able to spin up dev/test environments on the fly became a thing.

--
Then in 2006 AWS said, "Hey, there's this cloud we've built from our leftover computing resources. Does anyone want to use it?" And the world said, "YES".

The dynamics of the cloud changed a lot of our programming paradigms. Think about this: in the "old" days, if you wanted to set up a new app environment, maybe all you had to do was copy the VM to a new server. But VMs tend to be HUGE. It's one thing to copy them across a corporate network; if you've ever tried to copy a VM up to the cloud, you know it could take days. It's faster now, but the images are still very large and the uploads still take a long time. At any rate, not very agile.

A lot of clouds still support running VMs because it's a very solid way to do things. But it takes forever to make a change and upload a new VM, and lifting and shifting workloads to the cloud got really time consuming and chewed up a lot of bandwidth.

As a result of all this a lot of people started building cloud native applications. That is, apps that were built in the cloud as opposed to built on-site and moved to the cloud.

--
So among the many, many paradigms that came about changing the way we deploy things to the cloud, one of them was Docker.

Docker is neat because a docker container is really about the smallest possible deployable thing you can have. Much smaller than an entire VM. And each docker container is isolated and highly scalable.
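
To make that concrete, here's a minimal sketch of the round trip; the image name, port, and registry user are all made up:

 # build an image from the Dockerfile in the current directory
 docker build -t myapp .
 # run it, publishing a (hypothetical) port to the host
 docker run -d -p 8080:8080 myapp
 # tag and push it somewhere a cloud cluster can pull from
 docker tag myapp myhubuser/myapp:1.0
 docker push myhubuser/myapp:1.0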

 - Setting up an environment was easy because you could copy preexisting templates for small pieces, a container for the appserver, another for the database, etc.
 - Deploying apps got a little more complicated because you had to build a container with your app deployed to an app server. But DevOps tooling did most of that heavy lifting once you set it up.
 - Cloning environments became easy because the components were small and easy to move to the cloud.

--
So today, if we want a "Run Anywhere" cloud solution, Docker can be a good choice. It's small, lightweight, and portable across Linux platforms. It can require a substantial retooling of your devops and deployment scripts, but that should be a one-time cost.




Thursday, January 19, 2017

Adventures in Raspberry Pi

I got my pi3 and the starter kit, including a 16 GB microSD card with NOOBS installed. I hooked it up to my home TV and let it install Raspbian. Then I set up the locale, enabled SSH, enabled VNC, and set a new password, all using sudo raspi-config. And then I shut it down with a sudo halt -h and walked it over to my office where the router lives.

Now it's wired to my home router (the cable didn't reach the living room) and I can ssh into it from my laptop. First I look at the router config to see which IP address it got assigned. In this case it's 192.168.1.25.

First I clear any stale host key for that address out of known_hosts (despite the name, ssh-keygen -R removes an entry, it doesn't generate anything):
 ssh-keygen -R 192.168.1.25

And now ssh in as the default pi user with ssh pi@192.168.1.25

The authenticity of host '192.168.1.25 (192.168.1.25)' can't be established.
ECDSA key fingerprint is SHA256:3F54ujxVlzKIv6JwQ19/iD3gosKW93Fkdtk+8GiEqEw.

Are you sure you want to continue connecting (yes/no)? yes


And check that the pi can see the router using a ping command:
pi@raspberrypi:~ $ ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.403 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.353 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=0.366 ms


Update the software by running 
 sudo apt-get update
 sudo apt-get upgrade

 sudo reboot

Download a VNC viewer and have it connect to the pi3 at the same address, 192.168.1.25.

Let's try to get wireless working. Sadly, I have a very old router that only supports WEP, so this part is not going to work. I'll disable WiFi for now by entering the following line into the /etc/rc.local file:

ifconfig wlan0 down
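
One detail worth remembering: rc.local is a shell script that ends with an exit 0, so the line has to go above that. Roughly:

 #!/bin/sh -e
 # ... whatever else was already in rc.local ...
 ifconfig wlan0 down   # keep the unusable WiFi interface off
 exit 0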

I'm going to let that run overnight and see if it can still ping the router in the morning. The problem I've been having is that after I install pi-hole and some time passes, it seems to stop working.

I'll pick this up around noon tomorrow.

---

Now I'll install pi-hole, going to the pi-hole site for easy instructions.
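
If memory serves, their easy instructions amount to one piped install command (worth eyeballing the script before feeding it to bash):

 curl -sSL https://install.pi-hole.net | bash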










Friday, December 23, 2016

Starting again on a Mac

I've been using a Mac for a month or more, but mainly for non-coding work. I still have a small number of Windows servers set up in the cloud somewhere that host all my eclipse-based tools and heavyweight java runtimes.

I took a train from Pittsburgh to NYC yesterday. Nine hours gave me time to digest most of Azat Mardan's Practical Node.js book. Also downloaded Kyle Simpson's excellent "You Don't Know JS" series.

So this afternoon I was just getting started setting up for dev on a Mac. Installed Sublime Text. Installed git, node.js and npm. Installed node-red. Went up to Bluemix and downloaded the bluemix and CF command line tools. Did a git push of projects from my old Windows box, and cloned the ones that made sense to my Mac.
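
The node-red piece, at least, is just a couple of commands once node and npm are in place:

 npm install -g node-red   # puts the node-red command on the PATH
 node-red                  # serves the flow editor at http://localhost:1880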

I have a couple of goals. 1) Build a UI for my HL7 project and then use API Connect to define and manage the connections to the other parts of that project. 2) Start to investigate replacing BPM UIs with REST-based UIs built with some best-of-breed tooling. 3) Reorganize the Sumo project.

--
JQuery and JQuery UI