
How to Install XO Xen-Orchestra from Sources

To make sure the terminology is fully understood, XOA stands for Xen-Orchestra Appliance. It is the commercial side of Xen-Orchestra which, to add to the confusion, also offers a free version of XOA with a slightly limited feature set.

So when we say “Install XO from Sources” what this actually means is that we’re installing the open source Xen-Orchestra, not the commercial XOA aka. Xen-Orchestra Appliance.

One important point that many people don’t realise is that Xen-Orchestra, running in a virtual machine, can actually manage the physical host server that the virtual machine itself is installed on. Pretty cool, right?

 

Step 1 – Create a Virtual Machine Running Debian 11 Bullseye

Firstly, give this virtual machine as many vCPUs and as much RAM as you can spare for the initial setup, as getting everything built is a fairly resource intensive task. Thankfully there is a handy piece of software on GitHub, XenOrchestraInstallerUpdater, from Roni Väyrynen and other contributors.

I’ve chosen the above because, at the time of writing, Debian 11 Bullseye is the latest Debian version that the script supports.

 

Step 2 – Understand Sudo Permissions Have Changed in Debian 10 + 11

Yup. Your usual ‘sudo -s’ command and SSH’ing in as the root user simply no longer work out of the box on Debian 10 and 11. A rather annoying, poorly documented piece of information given the significance of the change. But hey, that’s why I’m writing about it here.

By default on Debian 10 and Debian 11, SSH’ing in as the user ‘root’ is disabled, and the ‘{YourUsername}’ account that you configured during setup is not included in the sudoers list. So when you SSH into your Debian 10 or 11 virtual machine as ‘{YourUsername}’ and run the common command ‘sudo -s’ to switch to the root user, you get an error saying that your username isn’t in the sudoers file.

Say what?

Anyhow, the command that gets you there is ‘su -’, which prompts for the root password, drops you into a root shell in the same way ‘sudo -s’ would, and gets you to where you need to be.
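If you’d rather get sudo working for your regular user going forward, one option (a hedged example, with {YourUsername} standing in for the account you created during setup, and assuming the sudo package is installed) is to add that user to the sudo group from the root shell, then log out and back in for the change to take effect;

usermod -aG sudo {YourUsername}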

 

Step 3 – Install Git

We’re going to be pulling in and running the code from the GitHub repository mentioned earlier, so we need to ensure Git is installed on the virtual machine so that it’s possible to download the code we need for XenOrchestraInstallerUpdater.

Simply run the command;

 

apt-get install git
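Note that this needs to be run as root (the ‘su -’ shell from Step 2). If apt complains that it can’t find the package, refresh the package index first;

apt-get update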

 

Step 4 – Clone XenOrchestraInstallerUpdater Git Repository

Next step is to get the code from GitHub down onto your virtual machine. So simply run the command;

 

git clone https://github.com/ronivay/XenOrchestraInstallerUpdater.git

 

This will download the code into a new XenOrchestraInstallerUpdater directory inside the directory you are currently in. Make a note of where that is and make sure it’s somewhere sensible that you’ll be able to find later on.
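The later steps assume you are working from inside that new directory, so change into it now;

cd XenOrchestraInstallerUpdater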

 

(Optional) Create a Self-Signed SSL Certificate

The reason this step is optional is that for playing around you don’t really need to bother with it. But obviously for production environments you want to keep security at the forefront of your mind. To be clear, this isn’t a recommendation for how to use HTTPS in a production environment; it is simply a quick way of getting SSL up and running should you choose to experiment with it;

 

cd /etc/ssl

mkdir xo

cd xo

openssl req -newkey rsa:4096 -x509 -sha256 -days 3650 -nodes -out xo.crt -keyout xo.key
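If you want to sanity check the certificate you’ve just generated, openssl can print back its subject and validity dates;

openssl x509 -in xo.crt -noout -subject -dates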

 

Step 5 – Create a Copy of Configuration Files

Thankfully XenOrchestraInstallerUpdater comes with a sample configuration file so you don’t need to start from scratch. To copy it and make it your own, change back into the XenOrchestraInstallerUpdater directory and run the following command;

 

cp sample.xo-install.cfg xo-install.cfg

 

This is the copy command, aka ‘cp’. It says: copy the file ‘sample.xo-install.cfg’ to a new file named ‘xo-install.cfg’. The joys of Linux commands.

 

(Optional) Configure your Xen-Orchestra Configuration File

To do this, simply run the command;

 

nano xo-install.cfg

 

Now, you only need to do this if you’ve implemented the optional step above to create an SSL certificate. If you have done this, you need to specify the location of those certificate files within your Xen-Orchestra configuration file. As an example following on from the above, this is how your configuration file would look;

 

# Location of pem certificate/key files. Installation will automatically configure HTTPS if these are defined. Remember to change PORT variable as well.

PATH_TO_HTTPS_CERT=/etc/ssl/xo/xo.crt

PATH_TO_HTTPS_KEY=/etc/ssl/xo/xo.key
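The comment above also mentions the PORT variable. For HTTPS you would typically switch this from the default HTTP port to 443; the exact line already exists in your copy of the sample config, but as an illustration it ends up looking something like;

PORT=443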

 

Save the file and exit.

 

Step 6 – Install Xen-Orchestra (XO) from Sources

Simply run the following command and watch as things progress;

 

./xo-install.sh
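The script needs to be run as root (the ‘su -’ shell from Step 2 is fine) and from inside the XenOrchestraInstallerUpdater directory. If your shell complains that the file isn’t executable, invoking it via bash achieves the same thing;

bash ./xo-install.sh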

 

Wait a while, watch things progress and then you’re good to go.

 

Step 7 – Confirm Setup Works

You’ll know that the setup works for two reasons. Firstly, the console will print lots of success messages and tell you that everything has completed successfully. Secondly, you’ll now be able to browse to the IP address of the virtual machine and log in to Xen-Orchestra, with completely free and unlimited access to the platform’s features so you can play around to your heart’s content. Job done.
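If you prefer to check from the shell as well, the installer (at the time of writing) registers Xen-Orchestra as a systemd service, so something along these lines should show it as active, although the exact service name may differ between installer versions;

systemctl status xo-server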

The default username and password for XO Xen-Orchestra are;

 

Username: admin@admin.net

Password: admin

 

Naturally I don’t need to remind you that this is extremely insecure, so make sure to change this once you’ve got this setup.

 

Summary

So that’s how to get XO aka. Xen-Orchestra set up on a virtual machine inside a physical XCP-ng Server so that you can have a play around. You’ll notice that this is slightly different from getting XOA set up via the Quick Deploy feature once you’ve got XCP-ng Server running; that route is a fairly straightforward click-and-configure setup, but with limited features at the end of it.

The above shows you how to get the basics set up in no time with relative ease and one core dependency on the script from GitHub. The contents of this blog post have been pulled together from various snippets of insight around the internet to provide a clear guide for how to do this. A couple of key mentions go to LTS and Vates, and to Roni Väyrynen, for online video content and conversations that helped join the dots together, at least the nuggets of information that were missing from my brain; it may be straightforward to others!

How to Setup NICs on XCP-NG Centre on First Setup for Proper VLAN Configuration

This is quite a complex topic, and one that all depends on your specific setup. So we’re going to take a look at some of the options for how to get this set up with ease. Is what we’re going to talk through the best practice? Not sure. But I’ve not found any useful information online for a better approach, so to quote Admiral Adama from Battlestar Galactica when asked the question “So what’s your plan here?” he answers “Personally, I tend to go with what you know. ‘Til something better turns up”. So if the readers of this blog post have any better ways of doing this, please do share in the comments.

 

Physical Network Hardware

So let’s get some basics documented. The reality is that every organisation’s physical network hardware is going to be different. But fundamentally, at the most basic level, everyone has at a minimum: Modem -> Firewall -> Managed Switch -> XCP-NG Server -> Virtual Machine(s)

 

 

The reality is that in most home setups, when people are getting started they start off with a single device supplied by the ISP (Internet Service Provider). Often these devices act as everything and nothing: they perform the absolute basic requirements of a modem, a firewall and a switch, but fall short on most things, as they can’t handle VLANs, they can’t handle complex firewall rules and they can’t effectively manage complex networks. A typical jack of all trades, master of none.

Ultimately, if you are in this home lab setup and wondering why things don’t work, it’s highly likely because you simply don’t have the physical hardware that you need to achieve what you want. Consumer grade hardware simply doesn’t cut it, even the high end consumer grade hardware, as it is often designed to look powerful through curves, edges and extra antennas rather than actually being powerful under the hood. Unfortunately in the UK we don’t particularly have the equivalent of an ecommerce store for this kind of stuff to make life easy; the way things work is often behind complex business to business commercial relationships and ‘special’ pricing etc. It’s an absolute nightmare. Perhaps I need to get around to building a business model to help simplify the supply chain for these things, when I get the time. But hey, I digress. Back to the topic at hand.

Now things get a little more complex than the above as there are physical cables that are connecting the different physical devices, so let’s expand the above architectural diagram to understand the next level of detail. Here we have the setup whereby the firewall has 3x physical ports on the device and we have configured the firewall to have a simple setup of one port = one VLAN.

In reality, many VLANs often share a single physical port on the physical device, but let’s leave that for another blog post to cover off. So then we have the managed switch with the basic setup of 3x inbound connections and 3x outbound connections, each a one to one relationship to keep things simple. Again, the reality of any real world implementation is that we use things called Trunk Ports, which is just a term for a physical port that carries traffic for every VLAN, plus untagged (aka. non-VLAN) traffic, through to the next physical device.

What this ultimately leaves us with is the setup as follows;

 

 

So we have 3x physical RJ45 cables connected between the physical firewall and the physical managed switch, and then another 3x physical RJ45 cables connected between the physical managed switch and the physical type-1 bare metal hypervisor XCP-ng Server. Everyone’s setup is going to be different, but if you’re working with Ubiquiti/UniFi then this guide on how to setup a UniFi managed switch is going to be handy.

But, we’ve jumped ahead a little. You’ll notice in the above diagram that the virtual machines are magically attached to the correct VLANs. But how, you may ask?

 

How to Configure XCP-ng Centre Networks on First Setup

When you first get XCP-ng Server and XCP-ng Centre set up, you’ll notice that once you plug in all the Ethernet cables you are simply presented with a bunch of unidentified networks that you need to figure out what to do with;

 

 

The networks that are listed here within XCP-ng Centre are directly related to the physical ports that you have available on your XCP-ng Server. So you’re probably thinking, awesome, let’s get started and get a virtual machine spun up and get cracking. But it’s not so straight forward. How do you know which network to attach the virtual machine to? After all, your choice here can have security implications depending on your use case. You don’t want a publicly accessible virtual machine to have access to your management network.

So we need to get into the topic of how to identify which networks in XCP-ng Centre map to which VLANs so that you can label them correctly and ultimately use them properly within your configuration. As with everything important, naming conventions and terminology are crucial, so make sure you agree these within your own organisation as appropriate.

The best way I’ve found to get this mapped in the first instance is actually quite simple, if not obvious; it also feels quite rudimentary, but it does work. Simply create multiple virtual machines, each one attached to a single network, then see what IP address each gets assigned from your DHCP server (likely managed on your firewall and enforced through VLAN configuration on the firewall and managed switch).

 

 

This is assuming you’ve already got your XCP-ng Centre configured to connect to your ISO Storage Repository for loading operating systems.

This step certainly feels a bit like using a sledgehammer to crack a nut, but it works, and I’ve yet to find a simpler way to achieve it. Once you’ve set the virtual machines up, SSH’d into them and run the equivalent of “what’s my IP” (aka. ipconfig or ifconfig etc.), you’ll know definitively which network each one is configured against.
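For reference, on a modern Linux virtual machine the quickest equivalent of “what’s my IP” is the ip command; the subnet and default gateway it reports will tell you which VLAN’s DHCP scope handed out the lease;

ip -4 addr show

ip route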

So now you need to rename your networks within XCP-ng Centre.

 

 

You’ll notice in the above screenshot that we don’t have the VLAN column populated, and that is purely because there are pending internal decisions to be made around the above setup on this specific use case, but you get the idea of how you can make sure the Network Interface Cards (NICs) map through to the XCP-ng Centre networks.

As with everything in IT “It depends” 😀

 

Summary

The reality is that what we have gone through in this blog post is an extremely basic setup by real world standards. What I hope has been conveyed throughout this blog post is one practical way of working out, when things are physically plugged in to different network ports, which actual networks they belong to before we start deploying different virtual machines for different use cases.

How to Setup XCP-NG Tools on Debian 11 Bullseye

Ok, so we’ve recently done a blog post on how to setup XCP-NG Tools on CentOS 7. This blog post is going to be rather similar, with a few nuances.

 

Pre-Check

You’ll notice that XCP-NG Tools is not installed when you check the Performance tab against your Debian 11 Bullseye virtual machine as you’ll see the message “To view memory data for this VM, you need to install XCP-ng VM Tools”;

 

Step 1 – Insert the guest-tools.iso in the Virtual Disk

To do this, go into your virtual machine within XCP-NG Centre, then into the Storage tab, then make sure that the XCP-NG VM Tools ISO is selected in your DVD Drive;

 

 

Step 2 – SSH Into the Virtual Machine

Next step is pretty straight forward, SSH into your virtual machine so you can run the commands to install XCP-NG Tools on Debian 11 Bullseye.

 

Step 3 – Switch to the Root User

This step is one that isn’t particularly well documented online. As of the Debian 10 Buster release, and continuing with Debian 11 Bullseye, to switch to the root user you need to run the command;

 

su -

 

 

 

Step 4 – Mount the CD-ROM/DVD Drive

This step is to ensure that your Debian 11 Bullseye can access the contents of the virtual CD-ROM/DVD Drive that we configured within step 1. To do this, simply run the command;

 

mount /dev/cdrom /mnt/

 

What the above command is saying is: mount the device at /dev/cdrom onto the folder location /mnt/ – there is nothing more to it than that.
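A quick way to confirm the mount worked is to list the contents; you should see the Linux directory containing the install script referenced in the next step;

ls /mnt/Linux/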

 

Step 5 – Run the install script for XCP-NG Tools

Now that your Debian 11 Bullseye virtual machine has access to the contents of your virtual CD-ROM/DVD Drive, which contains the guest-tools.iso, it’s time to run the installation script.

 

bash /mnt/Linux/install.sh

 

Follow the steps and you’re on your way. You’ll know that XCP-NG Tools has been successfully installed because you’ll get a confirmation message;

 

And you’ll also note within the Performance tab for your Debian 11 Bullseye virtual machine that the memory data starts to come through within a few minutes at the absolute most (in most cases in about 30 seconds), even without rebooting;
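Once the tools are installed you can tidy up by unmounting the virtual drive, and then eject the ISO from the DVD Drive within the Storage tab in XCP-NG Centre if you wish;

umount /mnt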

 

 

Summary

When you follow these simple steps, it’s a few clicks and commands to get XCP-NG Tools up and running on Debian 11 Bullseye via XCP-NG Centre. For every virtual machine that you setup, make sure you install XCP-NG Tools to make your life as easy as possible.

How to Setup XCP-NG Tools on CentOS 7 Using XCP-NG Centre

Short blog post with a simple few commands to get you up and running quickly. XCP-NG Tools is also commonly referred to as Xen Tools; even though the two are technically different (XCP-NG Tools is based on Xen Tools), that’s a topic for another day.

 

Pre-Check

You’ll notice that XCP-NG Tools is not installed when you check the Performance tab against your CentOS 7 virtual machine as you’ll see the message “To view memory data for this VM, you need to install XCP-ng VM Tools”;

 

Step 1 – Insert the guest-tools.iso in the Virtual Disk

To do this, go into your virtual machine within XCP-NG Centre, then into the Storage tab, then make sure that the XCP-NG VM Tools ISO is selected in your DVD Drive;

 

 

Step 2 – SSH Into the Virtual Machine

Next step is pretty straight forward, SSH into your virtual machine so you can run the commands to install XCP-NG Tools on CentOS 7.

 

Step 3 – Mount the CD-ROM/DVD Drive

This step is to ensure that your CentOS 7 virtual machine can access the contents of the virtual CD-ROM/DVD Drive that we configured within step 1. To do this, simply run the command;

 

mount /dev/cdrom /mnt/

 

What the above command is saying is: mount the device at /dev/cdrom onto the folder location /mnt/ – there is nothing more to it than that.

 

Step 4 – Run the install script for XCP-NG Tools

Now that your CentOS 7 virtual machine has access to the contents of your virtual CD-ROM/DVD Drive, which contains the guest-tools.iso, it’s time to run the installation script.

 

bash /mnt/Linux/install.sh

 

Follow the steps and you’re on your way. You’ll know that XCP-NG Tools has been successfully installed because you’ll get a confirmation message;

 

 

And you’ll also note within the Performance tab for your CentOS 7 virtual machine that the memory data starts to come through within a few minutes at the absolute most (in most cases in about 30 seconds), even without rebooting;

 

 

Summary

When you follow these simple steps, it’s a few clicks and commands to get XCP-NG Tools up and running on CentOS 7 via XCP-NG Centre. For every virtual machine that you setup, make sure you install XCP-NG Tools to make your life as easy as possible.

How to Increase Connection Timeout in Apache Tomcat 8

This is one that comes up fairly infrequently and always seems to take way more Googling than it should to get a straight answer, when in fact the answer is an extremely simple one. Well, usually.

If you are getting regular or intermittent errors with your scripts timing out, count the seconds. Most people are used to a default timeout of around 30 seconds with something like Apache httpd. Well, Apache Tomcat likes to play by its own rules here and doesn’t quite follow that convention, which may leave you scratching your head for a while until you get to the bottom of it.

Here’s what the official Tomcat 8.5 documentation says on the topic;

“The number of milliseconds this Connector will wait, after accepting a connection, for the request URI line to be presented. Use a value of -1 to indicate no (i.e. infinite) timeout. The default value is 60000 (i.e. 60 seconds) but note that the standard server.xml that ships with Tomcat sets this to 20000 (i.e. 20 seconds). Unless disableUploadTimeout is set to false, this timeout will also be used when reading the request body (if any).”

Simple, right?

Basically, just check what you’ve got in your server.xml file and adjust accordingly as it’s not as straight forward as it should be.

But where is that file, you may be wondering? Well, by default you’ll find it at {TOMCAT_HOME}/conf/server.xml, so check there first.

This home directory is going to be different depending on operating system and how Tomcat was installed, so you may have to hunt around for this if you’ve not edited these settings before. Make a note of this file location as it will come in handy over time.

The line you are looking for is;

<!--60 Seconds-->
<Connector URIEncoding="utf-8" connectionTimeout="60000" port="8084" protocol="HTTP/1.1" redirectPort="8443"/>

 

Simply change the value of the connectionTimeout attribute to what you need, restart Apache Tomcat 8 and re-run the script that was timing out; things should work now.
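How you restart Tomcat depends on how it was installed. On systemd based Linux installs it is often a service (commonly named tomcat, tomcat8 or similar), but the scripts shipped in Tomcat’s bin directory work anywhere; as an illustration, using the same {TOMCAT_HOME} placeholder as above;

{TOMCAT_HOME}/bin/shutdown.sh

{TOMCAT_HOME}/bin/startup.sh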

Happy config!

How to Troubleshoot Ping ICMP Not Working

First of all, networking is a challenge in itself: understanding what you want to achieve and how to configure things to make them work. It’s made even harder when basic things that should just work don’t, for one reason or another. This blog post is off the back of one of those niggles that was driving me crazy; I was trying to understand why something wasn’t working when it should have been, and it turned out the answer was pretty simple once I finally got to the bottom of it.

Before we jump into the details, let’s make sure we’re all on the same page.

 

What is Ping?

Ping is a command line utility that is designed to test the reachability of a hostname or IP address. In simple terms, it’s a way of your computer saying “hello” to another computer and waiting to see if the other computer responds and says “hello” back – or just ignores you.

To use the ping command, it’s as simple as running any of the following commands from your command line on Windows or Linux (on Linux systems, ping is usually packaged in the iputils package, available via yum etc.).

 

ping google.com

Which will respond as follows if a successful connection is made

 

 

ping 8.8.8.8

8.8.8.8 is the primary IP address of Google’s public DNS system if you’re not aware, with 8.8.4.4 being the secondary IP address. When ping’ed this will respond as follows if a successful connection is made

 

ping a-website-that-doesnt-exist.com

Here’s what happens when you don’t get a successful response, such as in this example where there is no DNS A record for that hostname, so it cannot be translated into an IP address.

 

ping 192.168.3.201

Now let’s say that you have a computer on your network that you know exists and you know it is turned on. When you try to ping that IP address, you should receive a successful message as shown above. But in some situations you may get errors such as “request timed out”

 

This is where things get a little fun trying to debug these things, which we’ll cover off in this blog post.

 

Ping Summary

The above various examples are what both success and failure messages look like when using the ping command line utility. So you know what to look out for when debugging why ping isn’t working.

 

What is ICMP aka. Internet Control Message Protocol?

The ping command line utility utilises ICMP, the Internet Control Message Protocol. I’m not going to go into too much detail on this one as we could go down a fairly deep rabbit hole around the Internet Protocol Suite, the IETF and RFCs (Requests for Comments) such as RFC 1918, the document that defines which IP address ranges are reserved for private use and which remain publicly routable. We did a blog post covering what RFC1918 is a few years back to help people understand this in a simple way.

Anyhow, to keep things simple, ICMP is essentially a supporting protocol within the Internet Protocol Suite that is designed to carry success/failure type messages when one host is trying to communicate with another host, just like in the examples we gave above using the ping command line utility. The reality is that there is a whole host of protocols that the average technical user has never even heard of unless they’re a network specialist, such as: TCP, UDP, ICMP, ESP, AH, GRE, EoIP, IPv6, IGMP, PIM, OSPF, SCTP, CARP, PFSYNC. Most people have heard of TCP, some know of UDP, but most people have never heard of the rest and will never need to know anything about them.

In reality, ICMP is used by things that are primarily under-the-hood technologies that most people haven’t ever heard of and/or don’t really care about. The exception being command line tools like ping and tracert/traceroute commands which are designed for humans to be using to help with debugging.

Why all of this is important is that any firewall along the journey from source to destination could be blocking ICMP traffic, which would cause your ping command to fail when you would expect it to work.

 

Understanding the Route of the Network Packets

Now comes the fun part. While it sounds relatively straightforward that Computer 1 wants to talk to Computer 2, the reality is that things under the hood are 100x more complex than this, with so many nuances and company specific configurations that it’s never as straightforward as it should be to debug basic things like this.

Hence it’s essential that you understand how things are configured under the hood. Unfortunately, in my experience in most enterprise organisations no-one really has a clue how all this actually works or why it works; it just kind of does for the average non-technical user, so organisations take the view that because it just works, let’s not go prodding it. Ultimately this is an extremely bad thought process, and when leadership doesn’t question these things it causes endless headaches for IT staff doing their daily work, because things that should just work often don’t, and often require weeks, even months, of meetings/emails/conversations to get to the bottom of what should really have been a 5 minute fix.

It’s even worse in some organisations where it soon becomes a case of networking/firewall/configuration whack-a-mole: people start fiddling with settings without understanding things fully, which solves one problem but creates ten more that get reported days or weeks later when other things start to break. This stuff is hard; it requires extremely knowledgeable individuals making informed decisions to configure things well.

Anyhow, back to the main point. You need to understand how things work. How do packets get from Computer 1 to Computer 2 when you are ping’ing the IP address or hostname?

Let’s look at an extremely basic setup and the hops along the way;

  • Source Computer sends ping command to Destination Computer
  • Source Computer outbound firewall – Does it allow outbound ICMP traffic?
  • Network Router/Firewall/Gateway – Does it allow ICMP traffic through from the Source Computer to the Destination Computer?
  • Destination Computer inbound firewall – Does it allow inbound ICMP traffic?
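A handy companion to ping when working through that list is traceroute on Linux (usually provided by the traceroute package) or tracert on Windows, which attempts to report every hop between the source and the destination so you can see roughly where packets stop getting through. Bear in mind that intermediate hops which drop or deprioritise this traffic can show up as timeouts even when end-to-end connectivity is fine. Using the same example IP address as above;

tracert 192.168.3.201

traceroute 192.168.3.201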

And this is where things get even more challenging, since there can be configurations on Windows such as Public and Private networks that you connect to. Let’s be honest, the average user configuring this on their own machine hasn’t got a clue what to select and just randomly clicks one of the two options. In reality though, what a user clicks determines how Windows behaves, and ultimately, in this example, whether Windows responds to ping requests or completely ignores them.

 

Windows Firewall Public and Private Networks

To keep this section focused on the topic at hand, debugging ping requests not working as they should, we’re going to simply take a look at the two common Windows network settings – Public Network and Private Network.

Your device will either have one or both of these options;

  • WiFi Network Interface Card (aka. No cable plugged into device)
  • Ethernet Network Interface Card (aka. Cable plugged into device)

And each of these within your Windows Network and Sharing Centre will either be configured as a Public Network or a Private Network. By default on Windows, if a NIC (Network Interface Card) is configured to be used as a Public Network, then inbound ping requests will be ignored, whereas if the NIC is configured to be used as a Private Network, then inbound ping requests will be responded to.

Rather annoyingly on Windows, once this initial configuration is done (in most cases incorrectly…), changing a NIC from Public to Private or vice-versa is not as simple as clicking a button – you have to use Windows PowerShell. There should be no reason for this in 2022, but hey, send your complaints (aka. “feature requests”) into Microsoft to solve this one.

For anyone needing to do this, here’s how to change a Windows network from private to public or how to change a Windows network from public to private;

  1. Open Windows PowerShell as an Administrator
  2. Run command:
    1. Get-NetConnectionProfile
    2. Which will show you the network type, i.e. either public or private
  3. Run command:
    1. Set-NetConnectionProfile -Name "{NAME OF YOUR NETWORK}" -NetworkCategory Private
    2. Which will set the network's category to either Private or Public depending on what you type in the above command
  4. Then when you open your Network and Sharing Centre, you should see that the network has changed from Public/Private to the opposite of what it was previously.
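As an alternative to flipping the network profile, you can explicitly allow inbound echo requests through the Windows firewall. From an elevated Command Prompt or PowerShell, a rule along these lines (the rule name is just a label of your choosing) covers IPv4 pings;

netsh advfirewall firewall add rule name="Allow ICMPv4 ping" protocol=icmpv4:8,any dir=in action=allow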

 

Norton 360 Smart Firewall Blocking ICMP Ping Requests

Another one of those annoying “features” is when Windows based firewall software overrides the default Windows firewall, adding another layer of debugging to a problem. In this case, software such as Norton 360 Smart Firewall essentially takes control of the firewall management rather than Windows out of the box. This is where things can get messy.

End user firewall software, regardless of brand, tends to dumb things down to such a level that the detailed settings are often hidden away and hard to find in the system. Norton 360 is a prime example of this. By default it blocks inbound ICMP traffic, meaning that if you are trying to ping Computer 2 (with Norton 360 on) from Computer 1, then by default pings will fail. You need to allow ICMP traffic within the Norton 360 Smart Firewall settings to ensure that this traffic can pass through to Windows to respond successfully.

You can see here how deep the settings for this are buried within Norton 360 Smart Firewall, it’s almost impossible to find this without knowing exactly what you are looking for, which 99.9% of users simply haven’t got a clue about.

 

The irony is that there is nothing “smart” about this setup; it’s simply full of dumb assumptions made in the guise of security. These kinds of configurations are an absolute pain to deal with, as the default configuration is never well documented out of the box, which ultimately results in people digging and digging and digging to get to the root cause of why something that should be working isn’t. Hey, the joys of IT I guess. It does annoy me, though, all these assumptions made by software vendors to try and “help”. The best help they could give would be to write better documentation and/or build their software in a way that caters for different types of users, from basic to power users, with documentation that helps even the most computer illiterate individual understand what they need to configure. Most companies don’t seem to get this basic concept though, unfortunately.

 

Summary

I hope that this blog post has given you some guidelines and thoughts about how to debug and troubleshoot issues when commands like ping and tracert/traceroute aren’t quite working as expected. What you’ve hopefully picked up is that these things aren’t straight forward to debug.

The best advice I can possibly give is to be methodical at every step along the way. Aim to understand every hop along the journey, and confirm for every hop that traffic is successfully leaving it and successfully arriving at the next hop along the way.

It’s not straight forward and requires an exceptional understanding of the underlying infrastructure, which often isn’t easily accessible particularly within enterprise organisations. It’s not easy debugging these kinds of things with every hand/arm/limb/sense tied behind your back.