
How to install the Kaseya VSA Agent on a non-persistent machine

Deploying Kaseya VSA agents to workstations and servers provisioned from a standard image can be a challenge. The typical install methods result in duplicate GUIDs, because every machine is built from the same image, making it impossible to accurately monitor and report on these types of machines.

The following procedure has been tried and tested (successfully) numerous times on Citrix VDI/SBC environments leveraging Provisioning Server and Machine Creation Services. That said, the process should work on other non-persistent systems, provided they have a persistent volume where the application files can be placed.

The term ‘Target Devices’ used throughout this process is what provisioned desktops and servers are called in Citrix Provisioning Server. If you are not familiar with Citrix Provisioning Server, think of these as the non-persistent machines that the image is delivered to.

  1. The first thing we want to do is create an Organisational Unit (OU) where your update / maintenance target device(s) will live.
  2. Apply a GPO to the maintenance Organisational Unit (OU) which will stop Kaseya communicating with the master server.
  3. Create a new agent package configured to install the agent on the persistent drive, typically, the ‘writecache’.
  4. Create a new version and open the master image / vDisk in maintenance mode (read/write)
  5. Manually install the package onto the new version, making sure the GPO is working and preventing the agent from connecting to the Kaseya master server (a successful connection would update the KaseyaD.ini file).
  6. Once installed, go into the install location on the persistent drive and scoop out a copy of the folder labelled with the agent ID; this is typically found inside ‘Program Files (x86)\Kaseya’ (you will need this later).
  7. Stop the Kaseya services.
  8. Delete the three Kaseya registry keys that identify the agent.
  9. Perform typical cleanup tasks and shutdown the machine.
  10. Promote new version to Test/Production.
  11. Place the folder that was copied in step 6 onto the persistent drive of all target devices provisioned from the master image / vDisk. It should go in the directory Kaseya expects to find it, as configured in step 3 (Regedit can be used to confirm the path).
  12. Reboot your target devices.
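When you have a large number of target devices, step 11 can be scripted. Below is a minimal Python sketch, assuming admin rights over each device's administrative share; the device names, the D$ share, the agent ID and the install path are placeholder values and should match whatever your agent package actually uses.

```python
# Hypothetical helper for step 11: copy the agent-ID folder captured in
# step 6 onto each target device's persistent drive via its admin share.
# Device names, the D$ share and the agent ID below are example values.
import shutil
from pathlib import PureWindowsPath

def agent_dest_path(device: str, agent_id: str, share: str = "D$",
                    base: str = r"Program Files (x86)\Kaseya") -> str:
    """Build the UNC path where Kaseya expects to find the agent-ID folder."""
    return str(PureWindowsPath(f"\\\\{device}\\{share}", base, agent_id))

def push_agent_folder(src_folder: str, devices: list[str], agent_id: str) -> None:
    for device in devices:
        # Copies the whole folder; requires admin rights on the target device.
        shutil.copytree(src_folder, agent_dest_path(device, agent_id))

print(agent_dest_path("VDI-001", "AGENT-12345"))
```

Run it from a management box that can reach the targets; confirm the destination path against the registry first, as per step 11.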

That’s it… you’re done. Provisioned / non-persistent machines should now report into the Kaseya master server with a unique GUID, just like standard machines.

If Kaseya services fail to start on target devices, it’s likely due to the application files being in the wrong path on the persistent drive. Confirm the application files are where they should be according to the registry.

Very keen to hear how you go. Feedback welcome, thanks





Intel, Cloud Telemetry and Jevons Paradox

I recently heard on a tech podcast that Intel is contributing to the open source community via its open telemetry framework called ‘Snap’. Snap is designed to collect, process and publish system data through a single API.

Here’s a quick diagram along with the project goals taken from the GitHub page that will provide you with a simple overview:

Project Goals: 

  • Empower systems to expose a consistent set of telemetry data
  • Simplify telemetry ingestion across ubiquitous storage systems
  • Improve the deployment model, packaging and flexibility for collecting telemetry
  • Allow flexible processing of telemetry data on agent (e.g. filtering and decoration)
  • Provide powerful clustered control of telemetry workflows across small or large clusters
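The plugin model behind these goals splits the pipeline into collectors, processors and publishers. The Python sketch below is purely conceptual (Snap itself is written in Go and has its own plugin API); the function names and metric shape here are invented for illustration only.

```python
# Conceptual collect -> process -> publish flow, NOT the real Snap API.
import time

def collect():
    """Collector plugin: expose telemetry in a consistent shape."""
    return {"metric": "cpu.user", "value": 12.5, "ts": time.time()}

def process(sample, tags):
    """Processor plugin: filter/decorate on the agent before shipping."""
    return {**sample, "tags": tags}

def publish(sample, sink):
    """Publisher plugin: hand the sample off to a storage system."""
    sink.append(sample)

sink = []
publish(process(collect(), {"host": "node01"}), sink)
print(sink[0]["metric"], sink[0]["tags"]["host"])
```

The point of the shape is that each stage is swappable, which is what lets community and proprietary plugins coexist.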

The Intel representative went very deep and wide on how Snap works from both operational and development perspectives. All very exciting, especially when a tech company like Intel uses its resources to contribute to open source. Naturally, like many others, I get a little suspicious. Why are they doing this? What’s in it for them? What’s the hidden agenda?

As I was thinking about this, one of the show hosts asked the Intel rep the self-confessed cynical question: it all sounded very useful and technically interesting, but why would Intel spend time on this ‘open source’ stuff? What’s in it for them? Intel responded quite honestly about their motives. They said it wasn’t ‘rocket science’: they want consumers to buy more silicon… more Intel chips, it’s no secret. They then went on to talk about ‘Jevons Paradox’, which I found very interesting. Snap isn’t an expensive project burning expertise and engineering resources with no return on investment; it’s a project that supports a business model.

Jevons Paradox, sometimes referred to as the ‘Jevons Effect’, states that consumption of a resource tends to increase, rather than diminish, as the resource is used more efficiently. The following diagram, taken from the Jevons Paradox Wikipedia page, clearly shows the concept using the cost of fuel:
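The arithmetic behind the paradox is easy to demonstrate. In the toy Python example below (all numbers invented), an efficiency gain halves the cost per unit of service, but because demand is elastic, total resource consumption ends up higher than before:

```python
def demand(price_per_unit, elasticity=1.5, k=100.0):
    """Constant-elasticity demand: quantity = k * price^-elasticity."""
    return k * price_per_unit ** -elasticity

# Before: 1 unit of resource per unit of service, priced at 1.0 per unit.
before_use = demand(1.0) * 1.0
# After: efficiency doubles, so each unit of service needs only 0.5 units
# of resource and the effective price per unit of service halves to 0.5.
after_use = demand(0.5) * 0.5

print(after_use > before_use)  # total resource use rose despite efficiency
```

With elasticity above 1, demand more than doubles when the price halves, so the halved per-unit resource cost still multiplies out to more total consumption; with inelastic demand the paradox disappears.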


Using this theory, Intel believes that the easier they make the monitoring and analysis of their chipsets via an open source telemetry framework, the more those chipsets will be consumed, meaning customers will buy more Intel chips. The open plugin model means that community and proprietary plugins can be loaded into Snap, so it can easily be extended and tailored to meet even the far-left-field business needs in the analytics space.

The team over at Grafana were quite impressed by Snap, so much so that they created a Snap data source. If you are yet to hear of Grafana and what it does, I briefly discuss it in a previous article found here. In short, it’s an open source dashboard for system monitoring. Having the Snap data source in Grafana means you can natively create monitoring tasks and metrics without having to jump through a load of hoops.

CPU monitoring in Grafana using the Snap data source: 


For more information on the Intel Snap project, visit the GitHub page here:

And, to read more on Jevons Paradox, check out the Wikipedia page here:

Share the chatter! Cheers







Seven Deadly… Tools.

Like most IT professionals, I have tried, tested and benefited from a plethora of nifty tools over the years. Some good, some bad… and some, well, just downright ugly! Anyway, I have compiled a short list, seven to be exact, all freely available (or at least with a free version), that may be of use to you.

If you are a seasoned IT pro, the chances are you will be familiar with some, but probably not all, of these. I have included a short description, use case and download link for each. Hopefully some make it into your array of resources!

AD Info:

AD Info can be used to query and report on your Active Directory domains by simply pulling info on AD objects such as users, computers, groups and printers. You achieve this using 190+ built-in queries, with the option of creating your own custom queries and reports.

Use case: This has come in handy in a number of scenarios, including the on-boarding of new clients, Active Directory health checks / reporting, and troubleshooting the likes of permission issues and Active Directory replication consistency.
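If you prefer scripting the same kind of queries, the LDAP filters that AD Info wraps up can be built by hand. A small sketch in Python; the filter syntax and the matching-rule OID are standard Active Directory, but the helper functions themselves are just my own illustration:

```python
# Build LDAP filter strings like the canned "disabled users" query.
# 1.2.840.113556.1.4.803 is AD's bitwise-AND matching rule, used here to
# test the "account disabled" bit (0x2) of userAccountControl.
DISABLED_BIT = 2

def and_filter(*clauses: str) -> str:
    """Combine LDAP filter clauses with a logical AND."""
    return "(&" + "".join(clauses) + ")"

def disabled_users_filter() -> str:
    return and_filter(
        "(objectClass=user)",
        f"(userAccountControl:1.2.840.113556.1.4.803:={DISABLED_BIT})",
    )

print(disabled_users_filter())
```

Feed the resulting filter to any LDAP client or to PowerShell's `-LDAPFilter` parameter.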


LAN Speed: 

A simple but powerful tool for measuring file transfer, hard drive, USB drive, and Local Area Network (LAN) speeds (wired and wireless). It does this by building a file in memory, then transferring it both ways (without the effects of Windows file caching) while keeping track of the time, and then does the calculations for you. Simple concept and easy to use.

Use case: Use this tool to ensure you are getting the expected throughput on the LAN. I recently had success in finding the cause of poor Citrix profile synchronisation. I ran LAN Speed between one of the affected application servers and the target profile server and noticed I wasn’t getting anywhere near the expected 1Gbps transfer rate; it turned out to be a duplex mismatch on a handful of switch ports.
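The measurement itself is simple enough to sketch. This rough Python version writes an in-memory payload to a path, times both directions and derives MB/s; point the path at a file on a network share to approximate a LAN test (note this sketch does not defeat OS caching the way the real tool claims to):

```python
# Rough throughput probe in the spirit of LAN Speed; path and size are
# placeholders. Read speeds will be flattered by OS caching.
import os
import tempfile
import time

def measure_throughput(path: str, size_mb: int = 8) -> tuple[float, float]:
    """Return (write_mbps, read_mbps) for a transfer to 'path'."""
    payload = os.urandom(size_mb * 1024 * 1024)   # build the file in memory
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())                      # push data to the device
    write_s = time.perf_counter() - t0
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        f.read()
    read_s = time.perf_counter() - t0
    return size_mb / write_s, size_mb / read_s    # MB/s each way

with tempfile.TemporaryDirectory() as d:
    w, r = measure_throughput(os.path.join(d, "probe.bin"))
    print(f"write {w:.1f} MB/s, read {r:.1f} MB/s")
```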



SuperPuTTY: 

SuperPuTTY enhances the capabilities of the PuTTY SSH and Telnet client by allowing you to launch it in multiple tabs. It offers you the possibility to easily manage multiple sessions of PuTTY using a single, comprehensive working environment.

Use case: Troubleshoot and easily compare multiple configurations from within a single window.



PingPlotter: 

PingPlotter helps pinpoint network problems in an intuitive graphical way and continues monitoring connections long-term to further identify issues.

Use case: This is my go-to tool for measuring network latency. It’s quite basic, but it normally provides enough information to confirm whether the network is at fault or not. I had frequent headaches with a load of branch sites connected to head office over site-to-site IPsec tunnels. A quick report from this was enough to prove the server infrastructure innocent and build a case for an MPLS network, as the business network requirements had grown over the course of 12 months.
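When PingPlotter isn’t to hand, you can script a crude stand-in that samples round-trip time over a period and keeps a history to eyeball. The Python sketch below uses a TCP connect as the probe (ICMP echo needs raw sockets); the host and port in the usage comment are placeholders:

```python
# Crude latency sampler: time TCP connects to a host and record each RTT.
import socket
import statistics
import time

def sample_latency(host: str, port: int, samples: int = 5) -> list[float]:
    """Return per-sample round-trip times in milliseconds."""
    times_ms = []
    for _ in range(samples):
        t0 = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # connection established; close immediately
        times_ms.append((time.perf_counter() - t0) * 1000.0)
    return times_ms

# Example usage (placeholder target):
# rtts = sample_latency("head-office-gw.example.com", 443)
# print(f"median {statistics.median(rtts):.1f} ms, max {max(rtts):.1f} ms")
```

Unlike PingPlotter it won’t trace per-hop latency, but the median/max spread over time is often enough to show whether the link is the problem.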



Grafana: 

Grafana provides a powerful and elegant way to create, explore, and share dashboards and data. It’s most commonly used for visualizing time series data for Internet infrastructure and application analytics, but many use it in other domains including industrial sensors, home automation, weather, and process control.

Use case: Grafana could be a slick front end to your existing monitoring solution; it’s much richer looking than most built-in dashboards from ‘off the shelf’ products. I have run it to monitor server infrastructure in short-cycle / load-testing projects using some other components to complete the temporary solution: CentOS for the OS, InfluxDB for database services and Telegraf for metric collection. I will try and put together a walkthrough on the setup soon…
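To give a flavour of how metrics land in that stack: Telegraf and InfluxDB speak the InfluxDB line protocol, so you can push your own numbers alongside the collected ones. A minimal Python formatter (the measurement, tag and field names are just examples):

```python
def to_line_protocol(measurement: str, tags: dict, fields: dict,
                     ts_ns: int) -> str:
    """Format one point as InfluxDB line protocol:
    measurement,tag=val field=val timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

line = to_line_protocol("cpu", {"host": "web01"}, {"usage": 42.5},
                        1464000000000000000)
print(line)  # cpu,host=web01 usage=42.5 1464000000000000000
```

POST lines like this to InfluxDB’s write endpoint and they show up in Grafana like any Telegraf metric. (A production formatter would also need the protocol’s escaping rules for spaces and commas.)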



vCheck: 

vCheck (Daily Report) is an awesome PowerShell script developed by Alan Renouf of VMware. vCheck produces an HTML report on the status of your vSphere environment.

Use case: Configure a daily scheduled task to run vCheck against your vSphere infrastructure, the report can be emailed to the Service Desk / Engineering teams before the start of business so that problems can be resolved early, mitigating impact to service.
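If your scheduler doesn’t handle the emailing part, a short script can. A Python sketch, assuming vCheck has already written its HTML output to a file; the SMTP host and addresses are placeholders:

```python
# Wrap the vCheck HTML report in an email and ship it; host/addresses
# below are example values.
import smtplib
from email.message import EmailMessage

def build_report_email(html: str, sender: str,
                       recipients: list[str]) -> EmailMessage:
    """Wrap the vCheck HTML output in a multipart email."""
    msg = EmailMessage()
    msg["Subject"] = "Daily vSphere Health Report"
    msg["From"] = sender
    msg["To"] = ", ".join(recipients)
    msg.set_content("Your mail client cannot render HTML.")
    msg.add_alternative(html, subtype="html")
    return msg

def send_report(html: str, smtp_host: str, sender: str,
                recipients: list[str]) -> None:
    # Assumes the SMTP host relays for this sender without auth.
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(build_report_email(html, sender, recipients))
```

Schedule it to run right after the vCheck task so the report lands before the start of business.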



SP_Blitz: 

SP_Blitz can quickly flag common SQL Server issues in a matter of seconds, prioritizing what’s broken or dangerous to give you a clear view of what needs tackling first. I’ve never had any issues with SP_Blitz, but if you are looking to adopt it, make sure you trial it on a dev server before running it in a production environment.

Use case: SP_Blitz is a great place to start when doing SQL Server health checks. You can also run it as a scheduled task to automate regular checks on your SQL Server infrastructure and get a heads-up before those issues become real problems and affect production services. The following link provides a 5-minute demo from the developer, Brent Ozar.

sp_Blitz® – SQL Server Takeover Script
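For the scheduled-task idea, sp_Blitz returns a result set with a numeric Priority column where lower numbers are more urgent. A Python sketch of post-processing those rows; the row dicts stand in for whatever your SQL driver (e.g. pyodbc running `EXEC sp_Blitz`) hands back, and the sample findings are invented:

```python
def triage(rows: list[dict], max_priority: int = 50) -> list[dict]:
    """Keep findings at or below max_priority, most urgent first."""
    urgent = [r for r in rows if r["Priority"] <= max_priority]
    return sorted(urgent, key=lambda r: r["Priority"])

findings = [
    {"Priority": 200, "Finding": "Informational"},
    {"Priority": 1,   "Finding": "Backups not being taken"},
    {"Priority": 50,  "Finding": "Server triggers enabled"},
]
for row in triage(findings):
    print(row["Priority"], row["Finding"])
```

Pipe the surviving rows into your alerting or ticketing system and the daily check runs itself.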


Go nuts and share…