Thursday, 1 October 2015

Why make things consistent?

Things should be consistent.  When we're talking about I.T. artefacts (how we build databases, how we structure database tables, how we write code, and so on) we should build them consistently.

This ought to be obvious.  But wherever I go, I see quite the opposite.  Many software applications, or I.T. departments, or enterprises, are basically a collection of various artefacts, all of which have been built INCONSISTENTLY.  And then people wonder why it's so hard to manage all this.

The reasons things become inconsistent are many.  Here are a few examples:

- Many different developers looking at the same application over time, but not paying any attention to existing patterns.  In other words, for whatever reason (maybe they haven't been trained to see in this way), they don't actually SEE the pre-existing patterns.

- Developers think they have a better way of doing something so they just start doing things their way, without any regard for pre-existing patterns.  In other words, they see the patterns, but they don't care.  They don't like the patterns.  They think their patterns are better.  So they introduce new patterns.  Now you've got at least TWO sets of patterns.

Now multiply this by the number of developers who don't see the patterns, and multiply again by the number of developers who don't care.

What you end up with is a mess.  It contributes to unwanted technical debt.  Somebody's going to pay that debt someday, one way or another.

Why make things consistent?

When you make things consistent, it reduces maintenance cost and effort because

- you don't have to think about certain things.
- you don't have to wonder, is this one using method A or method B?
- you can apply consistent methodologies to manipulating these artefacts.
- those methodologies will be guaranteed to work because you know the structures are consistent.

Here's an example.

If you're building database tables, let's say you want to know when a record was created; who created it; when it was last modified; and maybe a row-version timestamp.  Why not make that consistent across every table in the database?
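
For example, a sketch only (the column names and types here are just one convention you might standardise on, not a prescription), every table could carry the same audit columns:

```sql
-- A hypothetical audit-column convention, applied identically to every table.
-- Names and types are illustrative; the point is that they never vary.
CREATE TABLE Customer (
    CustomerId    INT            NOT NULL PRIMARY KEY,
    Name          NVARCHAR(100)  NOT NULL,

    -- the same audit columns appear, with the same names, on every table
    CreatedBy     NVARCHAR(50)   NOT NULL,
    CreatedDate   DATETIME       NOT NULL,
    ModifiedBy    NVARCHAR(50)   NULL,
    ModifiedDate  DATETIME       NULL
);
```

Once the convention is fixed, tooling can rely on it: a single script can populate ModifiedBy/ModifiedDate on every table, because the columns are guaranteed to exist and be named the same way.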

Why would anyone in their right mind want to make it different from one table to another?  And yet, I see this all the time.  The reason is, nobody has taken responsibility for ensuring they're consistent.  It's not going to happen by accident.  Somebody needs to own it.  Maybe that somebody is you.

Over time, different developers create tables; some are created manually, some are scripted; nobody's made the methodology consistent, so stuff evolves organically over time.  Then somebody comes along and tries to make sense of it all, and what they find is essentially an unmaintainable mess.

Again this ought to be obvious.  But it just isn't, to some folks.  This is for them.  Maybe one of them will run across this one day, and it might spur them to say, hey, maybe I can make MY stuff consistent too!  :-D









Sunday, 16 August 2015

npm outdated + npm update is your friend

In the last year or two I've been getting my head around the whole world of front-end web development with AngularJS and its associated infrastructure (nodejs, npm, bower, gulp, etc).  A lot of that has to do with the fact that with my current client, I've been working on projects that use all that technology (where they have Microsoft's ASP.NET WebApi on the back-end, and all this other stuff on the front-end).

It seems to me a lot of .Net developers shy away from really learning all that stuff, or they just learn what they need to know to get the job done, but I think all this stuff is pretty cool and I've been trying to learn it all on a slightly deeper level.

So when you get into the whole npm thing, you quickly learn that the development tools, libraries, packages, plug-ins etc. required by your project are listed in a file called package.json, and that the versions of those packages are managed with symbols using a syntax called semver.
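
To make those symbols concrete, here's a little standalone sketch (this is not the real semver library, just an illustration of what the tilde and caret ranges allow for versions at or above 1.0.0; the real rules differ for 0.x versions):

```javascript
// Illustrative only -- a toy version of the semver "~" and "^" range rules.
function parse(v) {
  return v.split('.').map(Number);
}

// "~2.5.0" allows patch-level updates only: >=2.5.0 <2.6.0
function tildeSatisfies(version, base) {
  const [vMaj, vMin, vPat] = parse(version);
  const [bMaj, bMin, bPat] = parse(base);
  return vMaj === bMaj && vMin === bMin && vPat >= bPat;
}

// "^2.5.0" allows minor and patch updates: >=2.5.0 <3.0.0
// (caret behaves differently for 0.x versions; see the semver docs)
function caretSatisfies(version, base) {
  const [vMaj, vMin, vPat] = parse(version);
  const [bMaj, bMin, bPat] = parse(base);
  if (vMaj !== bMaj) return false;
  return vMin > bMin || (vMin === bMin && vPat >= bPat);
}

console.log(tildeSatisfies('2.5.9', '2.5.0')); // true
console.log(tildeSatisfies('2.6.0', '2.5.0')); // false
console.log(caretSatisfies('2.6.0', '2.5.0')); // true
console.log(caretSatisfies('3.0.0', '2.5.0')); // false
```

So "~2.5.0" says "I'll take bug fixes only", while "^2.5.0" says "I'll take new features too, as long as nothing is supposed to break".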

So, having inherited these projects I started looking at the choice of plug-ins used by my predecessors and after a bit of research I decided they seemed generally like pretty good choices.  You specify those plug-ins by using the devDependencies object in package.json.  Here's an example copied from the IDE I'm also learning, which is WebStorm:

{
  "name": "dBWeb",
  "version": "1.0.0",
  "private": true,
  "description": "Dave Barrows AngularJS Web Project Boilerplate",
  "scripts": {
    "init": "npm install",
    "install": "bower install",
    "test": "karma start"
  },
  "devDependencies": {
    "angular-jsdoc": "~0.3.8",
    "cli-color": "~0.3.3",
    "commander": "~2.5.0",
    "del": "~1.2.0",
    "gulp": "^3.8.8",
    "gulp-angular-filesort": "^1.0.4",
    "gulp-angular-htmlify": "~2.3.0",
    "gulp-angular-templatecache": "^1.3.0",
    "gulp-autoprefixer": "~2.3.0",
    "gulp-bytediff": "^0.2.0",
    "gulp-concat": "^2.3.3",
    "gulp-docco": "^0.0.4",
    "gulp-filelog": "~0.4.0",
    "gulp-filter": "~3.0.0",
    "gulp-htmlmin": "~1.1.3",
    "gulp-imagemin": "~2.3.0",
    "gulp-inject": "^1.0.1",
    "gulp-jscs": "~2.0.0",
    "gulp-jsdoc": "~0.1.4",
    "gulp-jshint": "^1.7.1",
    "gulp-load-plugins": "^0.6.0",
    "gulp-load-utils": "^0.0.4",
    "gulp-minify-css": "~1.2.0",
    "gulp-minify-html": "~1.0.4",
    "gulp-msbuild": "^0.2.4",
    "gulp-newer": "^0.5.0",
    "gulp-ng-annotate": "~1.1.0",
    "gulp-rev": "^1.1.0",
    "gulp-rev-replace": "^0.3.1",
    "gulp-sourcemaps": "^1.1.5",
    "gulp-task-listing": "^0.3.0",
    "gulp-uglify": "^1.0.1",
    "gulp-util": "^3.0.1",
    "gulp-watch": "^1.0.7",
    "gulp-zip": "^2.0.2",
    "handlebars": "~2.0.0",
    "jshint-stylish": "^0.4.0",
    "karma": "^0.12.24",
    "karma-chai": "^0.1.0",
    "karma-chrome-launcher": "^0.1.5",
    "karma-cli": "0.0.4",
    "karma-mocha": "^0.1.9",
    "karma-ng-html2js-preprocessor": "^0.1.2",
    "karma-phantomjs-launcher": "^0.1.4",
    "karma-sinon-chai": "^0.2.0",
    "karma-xml-reporter": "^0.1.4",
    "lodash": "~2.4.1",
    "merge-stream": "^0.1.6",
    "mocha": "^1.21.5",
    "q": "~1.0.1",
    "yargs": "^3.11.0"
  }
}


So I go to the command line (I usually work on Windows machines at work, but at home, as I study this stuff, I'm working on a Mac; the commands work the same either way once you've got node installed) and issue the following command ("npm outdated"), which gives me the following output:

davids-mbp:dBWeb davidwbarrows$ npm outdated
Package                   Current   Wanted      Latest  Location
cli-color                   0.3.3    0.3.3       1.0.0  cli-color
commander                   2.5.1    2.5.1       2.8.1  commander
gulp-load-plugins           0.6.0    0.6.0  1.0.0-rc.1  gulp-load-plugins
gulp-rev                    1.1.0    1.1.0       5.1.0  gulp-rev
gulp-rev-replace            0.3.4    0.3.4       0.4.2  gulp-rev-replace
gulp-task-listing           0.3.0    0.3.0       1.0.1  gulp-task-listing
gulp-watch                  1.2.1    1.2.1       4.3.4  gulp-watch
gulp-zip                    2.0.3    2.0.3       3.0.2  gulp-zip
handlebars                  2.0.0    2.0.0       3.0.3  handlebars
jshint-stylish              0.4.0    0.4.0       2.0.1  jshint-stylish
karma                     0.12.37  0.12.37      0.13.9  karma
karma-chrome-launcher      0.1.12   0.1.12       0.2.0  karma-chrome-launcher
karma-cli                   0.0.4    0.0.4       0.1.0  karma-cli
karma-mocha                0.1.10   0.1.10       0.2.0  karma-mocha
karma-phantomjs-launcher    0.1.4    0.1.4       0.2.1  karma-phantomjs-launcher
karma-sinon-chai            0.2.0    0.2.0       1.0.0  karma-sinon-chai
lodash                      2.4.2    2.4.2      3.10.1  lodash
merge-stream                0.1.8    0.1.8       1.0.0  merge-stream
mocha                      1.21.5   1.21.5       2.2.5  mocha
q                           1.0.1    1.0.1       1.4.1  q
davids-mbp:dBWeb davidwbarrows$ 

So that's nice; now I can make decisions about whether I want to update to the latest version.  In the course of this I decided that, for my stuff, I will generally use the tilde as opposed to the caret (see this for more info).  So with the above output, for example, I can say, okay, I want to update the "gulp-rev" plug-in to its latest version.  So I go back into WebStorm and change this:

"gulp-rev": "^1.1.0",

to this:

"gulp-rev": "~5.1.0",

And then I go back to the command line and issue the "npm update" command:

davids-mbp:dBWeb davidwbarrows$ npm update
gulp-rev@5.1.0 node_modules/gulp-rev
├── object-assign@2.1.1
├── rev-hash@1.0.0
├── through2@0.6.5 (xtend@4.0.0, readable-stream@1.0.33)
├── sort-keys@1.1.1 (is-plain-obj@1.0.0)
├── rev-path@1.0.0 (modify-filename@1.1.0)
└── vinyl-file@1.2.1 (graceful-fs@4.1.2, strip-bom-stream@1.0.0, strip-bom@2.0.0, vinyl@0.5.1)
davids-mbp:dBWeb davidwbarrows$ 

And npm sees that package.json now calls for a later version of that package than the one currently installed, so it updates it to the latest version the range allows.  (In some cases, obviously, you might decide you actually want to stay on an earlier version for whatever reason; say the tool you're using has its own set of dependencies, and upgrading it might break something else.  The whole semver thing lets you control that as desired.)

The point of all this is sort of two-fold:  

a) if you happen to be responsible for managing such applications, going through the dependencies one by one (by googling each package name, for example) means you begin to really understand the purpose of each plug-in used by your application; and 

b) modern apps consist of many moving parts, and it's not enough to just blindly accept them; these lists of tools (packages.config in the NuGet world, package.json in the npm world) require some care and feeding.

Maybe this kind of maintenance task would not happen once you were in the middle of a project.  If it ain't broke, don't fix it.  But if you're in between projects, say, or you are in more of an architecture role and it's your job to stay on top of this stuff, then it seems like a good idea to periodically go through your list of developer dependencies and ensure they're updated to the latest version that makes sense for your particular situation.  Developers are out there working on these tools, and improving them, so chances are you're better off with a newer version than an old outdated one.  (Obviously also, you've got to test your app when you make such a change, make sure upgrading is the right thing to do in your case, etc. etc.)

How do you manage such issues?  Feedback is welcome.





Sunday, 22 March 2015

Recover Azure VM password (Reset Azure VM password)

If you find you've changed the password on your Azure Virtual Machine but subsequently forgotten it (and I swear I was entering the correct password, but apparently not), and you google phrases like "recover azure vm password", you may run across blog posts from Microsoft like the one found here.

There are two things to note which are not entirely obvious from the steps listed there.  One is, if you don't use PowerShell every day, you may launch the PowerShell command prompt and not the ISE as they suggest.  You do in fact want the ISE, not the command prompt.  For example, on a Windows Server 2008 machine the ISE is under Accessories... Windows PowerShell... Windows PowerShell ISE.

The other thing is, if you're trying to reset the password of your VM, the VM actually has to be powered up.  (Duh!)  In retrospect this is obvious; how can you reset the password of your VM if the VM is not actually powered up?  You can't.

So power the VM up (go to the Azure management portal, go to "All Items", click on the VM in question, and click the "Start" button), wait for it to finish starting (I find I have to refresh the page to actually see the status), and then go through the steps in those MS blog posts; you'll now see different behaviour in the PowerShell ISE.  When you get to the last step, running the snippet that actually resets the machine, you'll be given a list of VMs and you can pick the one you want.  (I only had one VM, so that's the one I picked.)  This behaviour makes much more sense given that the VM is now powered up and able to receive the PowerShell commands that remotely reset the password for the specified account.
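
For reference, the snippets in those posts are built around the classic (pre-Resource Manager) Azure PowerShell cmdlets; a simplified version looks something like this.  Treat it as a sketch: the service name, VM name, and credentials are placeholders, and your subscription setup may differ.

```powershell
# Sketch only: classic Azure PowerShell cmdlets.  Names are placeholders.
Add-AzureAccount                            # sign in to your Azure account
Select-AzureSubscription "MySubscription"   # pick the subscription

# The VMAccess extension resets the local account's password remotely;
# as noted above, the VM must actually be running for this to work.
Get-AzureVM -ServiceName "MyCloudService" -Name "MyVM" |
    Set-AzureVMAccessExtension -UserName "MyAdminUser" -Password "NewP@ssw0rd!" |
    Update-AzureVM
```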

Finally one more thing:  there's a point where they tell you to issue this command:

Select-AzureSubscription -Default $subscription

I actually got an error at that point; PowerShell was complaining about the parameters.  I found that if I simply omitted the "-Default" parameter, it still worked.  In other words, I typed:

Select-AzureSubscription $subscription

Anyway the procedure did ultimately work and allowed me to reset my password.  It was a relief to find I could still access my VM and didn't have to start from scratch rebuilding it.  (Incidentally if you're not that far along building your VM, you can simply delete the VM and create a new one, but obviously after a certain investment of time that option becomes less attractive).

Update 2015-09-14:  you can now use the Windows Azure portal user interface to reset your password.  This appears to work well, albeit slowly.





Thursday, 19 March 2015

Elmah + ASP.NET WebAPI tip...

I had one of those situations today where you're looking at various blog posts and Stack Overflow posts and everybody's essentially saying, "it's easy, you just add the NuGet package, issue a couple of lines of configuration code, and it works."  But I was scratching my head for a while until I connected the dots.  So I thought I'd post a small tip here in hopes of saving somebody a minute or two.

So, if you want to add ELMAH to your ASP.NET WebAPI project to log unhandled exceptions, a bit of googling will quickly lead you to the elmah-contrib-webapi project, and helpful posts such as this one and this one, as well as various Stack Overflow posts.  I tried the stuff recommended there but for some reason couldn't get it working.  Having used ELMAH previously on an MVC project, I knew that the basic steps are:


  • Install a NuGet package
  • Tweak your web.config
  • Point to where you want to log to (on my previous project it was logging to SQL Server, but in this case the client wanted to log to XML files).

If you do all that, then when an unhandled exception occurs (which you can simulate by simply throwing a new Exception in C# code), the error gets logged, and you get a handy web page at http://yourwebsite/elmah.axd.

So here's what wasn't entirely obvious about the Elmah Contrib WebAPI instructions in those various posts:  if your web API happens to still live in the context of an MVC site, as is the case on this project, then I found to get the desired behaviour, I actually had to install TWO NuGet packages, one for the web API, and one for the MVC site.  (In our VS solution, the WebApi functionality happens to reside in the context of an MVC project; it's an AngularJS website, but the MVC routing is simply used to go to the index.html page for the single page app).

Anyway, so here's what I did that made it work:

  • Installed the Elmah.Contrib.WebApi package using the NuGet package manager, as described here.  (I also modified the Application_Start method as shown on that same web page).
  • Installed the Elmah.MVC package as described here.
  • Added the following line of code to the WebApiConfig class as described here.

config.Services.Add(typeof(IExceptionLogger), new ElmahExceptionLogger());

  • Modified the web.config to configure Elmah as desired, as described here.  (In my case I used the XML logging).
  • Then, to test, I added a "throw new Exception( )" statement to a method in a WebApi controller.
  • Then browse to your WebApi project (say you push the "go" button in Visual Studio to debug); when the web page comes up, add "/elmah" onto the end of the URL in the address bar of the browser.  At this point Elmah should be working, and you should see the Elmah page.
  • Then I pointed the Angular site to a page where, upon pushing the submit button, it would call that method and throw the exception.
  • Now, you should see the exceptions being displayed on the Elmah page, and (in my case) the XML files that are being written to disk.
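
For the web.config step, the relevant Elmah bits look roughly like this.  Treat it as a sketch of just the Elmah-related sections, not a complete web.config, and the logPath is whatever folder you want the XML files written to:

```xml
<configuration>
  <configSections>
    <sectionGroup name="elmah">
      <section name="errorLog" requirePermission="false"
               type="Elmah.ErrorLogSectionHandler, Elmah" />
    </sectionGroup>
  </configSections>
  <elmah>
    <!-- log errors as XML files on disk instead of to SQL Server -->
    <errorLog type="Elmah.XmlFileErrorLog, Elmah" logPath="~/App_Data/Elmah" />
  </elmah>
</configuration>
```
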
Hope this helps someone.  
cheers, dB

Saturday, 14 February 2015

JavaScript libraries on my current project

I've finally got my foot in the door of the AngularJS world and I'm pretty happy to have this opportunity.  The project I'm on uses an ASP.NET WebAPI back-end, and the development environment is Visual Studio with Resharper, but otherwise it's pretty much a front-end JavaScript based project whose architecture was designed by some pretty smart guys.  They're using AngularJS but also making use of the Flux architectural pattern.  If you're coming from a .Net development background this is a completely different way of thinking and it's quite an eye-opener.  But what I like about it is that it has a very good separation of concerns, it's very modular and decoupled, makes good use of the CQRS pattern, and (if we adhere to the patterns these guys have established, which I hope to write about in a future post), seems to lend itself to best practices in software architecture.

I've been reading a few books on it as mentioned in a previous post.  But this current contract has introduced me to a number of interesting JavaScript libraries I thought I'd mention here.  The lead developers at this organisation seem quite cutting edge and have clearly eschewed certain libraries in favour of others which many in the community seem to agree are evolutionary steps forward.

The open source community seems to move a lot faster than a community whose tools and techniques are tied to a particular vendor.

For example, on this project, the lead developers are using:

lodash instead of underscore.

gulp instead of grunt.

sinon, mocha and karma instead of just jasmine and karma by themselves.

They also swear by Resharper and use custom templates to generate the JS files that make up the "modules" (i.e. components) of their architecture.  Again I hope to write about this in more detail in a future post.  Here I just wanted to mention a few interesting tools and techniques they're using.







Wednesday, 14 January 2015

Where is Chutzpah installed?

I've been going through Shawn Wildermuth's Pluralsight course on Bootstrap, AngularJS, ASP.NET, EF and Azure.  There's a point toward the end of module 9, "unit testing", where he installs the Chutzpah test runner.  At the time the videos for the course were recorded, Chutzpah lived on Codeplex, but now it lives on Github.

In the video, Shawn installs Chutzpah, then goes into a bit about running Chutzpah from the command prompt.  That all seemed like a bit of magic to me, sort of like when the cooking shows just suddenly show you the cake that's already been baked, and it was not obvious to me where the .exe for Chutzpah actually got installed.  So for what it's worth: although I installed the Visual Studio extension as described in the video (using the Tools... Extensions and Updates... menu option in VS2013), I subsequently installed Chutzpah as shown here, then after some head-scratching, simply did a search of my local drive in Windows and found the console .exe actually gets installed in the "packages" directory of the solution.  In my case, that path was:

C:\Users\DavidBarrows\Documents\Visual Studio 2013\Projects\MessageBoard\packages\Chutzpah.3.2.6\tools
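
Once you know where it lives, you can run it from a command prompt against your test files, something like the following (the version number in the path will vary with whatever you installed, and the test file path here is hypothetical):

```shell
rem Run from the solution folder; point it at a JS test file or folder
packages\Chutzpah.3.2.6\tools\chutzpah.console.exe Specs\myTests.js
```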

Hope this helps somebody.

Cheers, dB