Deploying a Managed Kubernetes Cluster in 15 minutes with IONOS Enterprise Cloud

First, log into the IONOS dashboard using your account credentials

If required, you can also create a group with the relevant privileges for creating and managing Kubernetes clusters within the account. Select the User Manager from the Manager Resources menu on the banner.

A group can be created and the associated privileges applied for creating Kubernetes clusters.

To start creating a Kubernetes cluster, select the Kubernetes Manager from the Manager Resources menu.

Click ‘Create Cluster’

Provide a name for the cluster and click ‘Create Cluster’

The cluster will now be created; this should take around 3-5 minutes.

Once created, the status will change to green, and node pools can then be created within the cluster. Click ‘Create node pool’ to create a node pool.

Provide a name for the node pool

A node pool is created within a data center; you can either create a new data center or select an existing one.

With a data center selected, provide the number of nodes to provision, along with the CPU architecture, number of cores, RAM quantity and other requirements such as Availability Zone, storage type and storage size for the nodes.

A validation request is presented; click OK and the node pool will be provisioned.

To view the provisioning status of the node pool, click the expansion arrow to display more information; note that the nodes are provisioned in the background.

Whilst the nodes are being provisioned, you can take the opportunity to download the kubeconfig file, which is used to access the cluster’s API with kubectl.

The status of the node pool will turn ‘green’ once the node pool is in an available state.

The kubeconfig file can be copied to your workstation of choice, and then administration of the cluster and node pools can begin. In the example below, the kubeconfig is exported and some basic checks are made to ensure the cluster is in a running state.
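A minimal example of those checks, assuming the downloaded file has been saved as kubeconfig.yaml in the current working directory:

export KUBECONFIG=$PWD/kubeconfig.yaml
kubectl cluster-info
kubectl get nodes

kubectl get nodes should list each node in the pool with a Ready status once provisioning is complete.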

That concludes the walkthrough, showing how easy it is to provision a Kubernetes cluster with the IONOS Enterprise Cloud in around 15 minutes.

Using IONOS Enterprise Cloud S3-Compatible Cloud Storage with Veritas Backup Exec

  
Backup Exec 16 Feature Pack 2 provides S3-compatible cloud storage functionality. Customers can use the IONOS S3-compatible cloud implementation with Backup Exec. When the configuration process is complete, you can create a storage device within the Backup Exec console that can access most S3-compatible cloud environments. S3-compatible environments that are not specifically listed in the Backup Exec 16 Hardware Compatibility List are treated as Alternative Configurations, as defined in that list.

Configuring IONOS S3-Compatible Cloud Storage with Backup Exec

  
Configuring IONOS S3-compatible cloud storage using the S3 Cloud Connector in Backup Exec 16 FP2 is a two-step process: 


  • Create a cloud instance for your cloud – this requires pre-configuration of a user account and buckets in the cloud environment. The cloud location and configuration parameters must be provided to the Backup Exec server by configuring a cloud instance using the Backup Exec Management Command Line Interface (BEMCLI) (see Creating a Cloud Instance for IONOS S3-Compatible Cloud below).

  • Create a cloud storage device – in the Backup Exec console, using the storage device configuration wizard and providing the account credentials that can access the S3-compatible cloud location.

S3 Cloud Pre-Configuration Requirements

  
In the cloud environment, create an account for Backup Exec read/write access.  The cloud account credentials, known as the server access key ID and secret access key, must be provided in the Backup Exec console to create the storage device. 
  
The cloud environment must also have buckets configured for Backup Exec use.  Buckets represent a logical unit of storage in a cloud environment.  As a best practice, create specific buckets to use exclusively with Backup Exec.  Each Backup Exec cloud storage device must use a different bucket.  Do not use the same bucket for multiple cloud storage devices even if these devices are configured on different Backup Exec servers.
  
Bucket names must meet the following requirements:

  • Can contain lowercase letters, numbers, and dashes (or hyphens)
  • Cannot begin with a dash (or a hyphen)

  
Bucket names that do not comply with the bucket naming convention will not be displayed in the Backup Exec console during storage device configuration. 
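Buckets can be created ahead of time with any S3-compatible client. As a sketch, assuming the AWS CLI is configured with your IONOS access keys (the bucket name here is just an example):

aws s3api create-bucket --bucket backup-exec-device-01 --endpoint-url https://s3-de-central.profitbricks.com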
   

Creating a Cloud Instance for IONOS S3-Compatible Cloud


To create a custom cloud instance for an S3-compatible cloud storage server use the BEMCLI command “New-BECloudInstance”.  

  
To run BEMCLI on the computer on which Backup Exec is installed, you can either:
 

  • Go to the taskbar, click Start > All Programs > Veritas Backup Exec > Backup Exec Management Command Line Interface

or

  • Launch PowerShell, and then type Import-Module BEMCLI.


From the BEMCLI command line interface run the New-BECloudInstance command with the required parameters, for example:
New-BECloudInstance -Name "IONOS-Enterprise-Cloud" -Provider "compatible-with-s3" -ServiceHost "s3-de-central.profitbricks.com" -SslMode "Disabled" -HttpPort 80 -HttpsPort 443
 

Mandatory Parameters:

  • Name: Name of the new cloud instance. The cloud instance name must match the Backup Exec naming requirements:
    • Instance names can contain letters, numbers, and dashes (or hyphens).
    • Instance names cannot begin with a dash (or a hyphen).
  • Provider: Specifies the provider name of the cloud instance. For S3-compatible clouds the provider name is ‘compatible-with-s3’.
  • ServiceHost: Specifies the service host of the cloud instance. ServiceHost should be unique for each cloud instance that is created on the Backup Exec server.
  • SslMode: Specifies the SSL mode that Backup Exec will use for communication with the cloud storage server. The valid values are:
    • Disabled: Do not use SSL.
    • AuthenticationOnly: Use SSL for authentication only.
    • Full: Use SSL for authentication and data transfer.

  
Note: Backup Exec supports only Certificate Authority (CA)-signed certificates when it communicates with cloud storage in SSL mode. Ensure that the cloud server has a CA-signed certificate; if it does not, data transfer between Backup Exec and the cloud provider may fail in SSL mode. Users may choose to opt out of SSL and set SslMode to Disabled.
  
To confirm the command completed successfully, run the BEMCLI command “Get-BECloudInstance”.  The parameters of the newly configured cloud instance will be displayed.  Ensure that the ServiceHost points to the correct S3-compatible cloud implementation, the provider name is accurate and the SSL mode is set correctly.  If any parameters are not correct, rerun the New-BECloudInstance command with the corrected parameters.
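For example, to display just the relevant properties of the new instance (assuming the -Name filter and these property names, which mirror the parameters used above):

Get-BECloudInstance -Name "IONOS-Enterprise-Cloud" | Format-List Name, Provider, ServiceHost, SslMode, HttpPort, HttpsPort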
 

Creating a Cloud Storage Device for S3-Compatible Cloud

  
To configure a storage device for an S3-compatible cloud in Backup Exec:


1. On the Storage tab, in the Configure group, click Configure Cloud Storage

2. Click Cloud storage, and then click Next

3. Enter a name and description for the cloud storage device, and then click Next

4. From the list of cloud storage providers, select S3, and then click Next

5. From the Cloud Storage drop-down, select the name of the instance created with BEMCLI 

6. Click Add/Edit next to the Logon account field. 

7. On the Logon Account Selection dialog box, click Add

8. On the Add Logon Credentials dialog box, do the following:

  • In the User name field, type the cloud account access key ID.
  • In the Password field, type the cloud account secret access key.
  • In the Confirm password field, type the cloud account secret access key again.
  • In the Account name field, type a name for this logon account.

                The Backup Exec user interface displays this name as the cloud storage device name in all storage device options lists.

9. Click OK twice. 

10. Select the cloud logon account that you created in step 8, and then click Next

11. Select a bucket from the list of buckets that are associated with the server name and the logon account details you provided in earlier screens, and then click Next

12. Specify how many write operations can run at the same time on this cloud storage device, and then click Next. This setting determines the number of jobs that can run at the same time on this device. The suitable value may vary depending on your environment and the bandwidth to the cloud storage; you may choose the default value.

13. Review the configuration summary, and then click Finish. Backup Exec creates a cloud storage device. You must restart the Backup Exec services to bring the new device online.

14. In the window that prompts you to restart the Backup Exec services, click Yes. After the services restart, Backup Exec displays the new cloud storage location in the All Storage list. If the S3-compatible cloud environment is not displayed in the Backup Exec storage device configuration wizard or console, use BEMCLI to ensure the parameters for the cloud instance are correct.

Once the S3-compatible cloud storage device is configured in Backup Exec, you can target backup, restore and duplicate jobs at the cloud server. As a best practice, complete test backup and restore operations before running regularly scheduled jobs. Backup Exec data lifecycle management will automatically delete expired backup sets from the cloud server.

IONOS Enterprise Cloud – Data Center Designer – Introduction

With the Enterprise Cloud, you receive a modern IaaS platform for cloud computing: highly available, secure, reliable, and with fast software-defined networking. This means you receive precisely the virtual IT infrastructure that your company actually needs. The drag-and-drop feature in our Data Center Designer allows you to put together the resources for your customised virtual data centre, without any rigid, prefab packages.

Our live vertical scaling gives you the option of flexibly adding new capacities and components to your virtual infrastructure – at any time, on short notice, and without rebooting the system! This is what makes the Enterprise Cloud by 1&1 IONOS one of the most attractive corporate cloud solutions available anywhere on the market.

How to Install and Configure WordPress on CentOS 7

Introduction

WordPress is a free and open-source blogging platform and content management system (CMS) based on PHP and MySQL. WordPress is currently the most popular CMS in the world, with more than 20,000 plugins available to extend its functionality. You can easily create a simple website or blog, or complex portals and enterprise websites, using WordPress.

WordPress provides lots of features. Some of them are listed below:

  • WordPress is available in more than 70 languages, so you can build a website in the language of your choice.
  • You can easily manage your content, scheduling, appearance and publishing using WordPress, and also protect your posts and content with a password.
  • WordPress comes with thousands of themes for you to create a beautiful website. You can also upload your own theme with the click of a button.
  • With the importers feature you can easily import your blog from another website to WordPress.
  • WordPress provides search engine optimization out of the box, and also provides many SEO plugins.

In this tutorial, we will discuss how to install and configure WordPress on a CentOS 7 server.

Requirements

  • A server running CentOS 7.
  • A non-root user with sudo privilege setup on your server.

Getting Started

Update your system with the latest package versions by running the following command:

sudo yum update -y

Once your system is up-to-date, you can proceed to the next step.

Installing LAMP

Before installing WordPress itself, you will need to install the LAMP stack and other required packages on your server.

You can install all the necessary packages with the following command:

sudo yum install httpd mariadb mariadb-server php php-common php-mysql php-gd php-xml php-mbstring php-mcrypt php-xmlrpc unzip wget -y

Once installation is complete, start the Apache and MariaDB services and enable them to start at boot with the following commands:

sudo systemctl start httpd
sudo systemctl start mariadb
sudo systemctl enable httpd
sudo systemctl enable mariadb

Configuring MariaDB for WordPress

By default MariaDB is not secured, so you will need to secure it first. You can do this by running the mysql_secure_installation script:

sudo mysql_secure_installation

Answer all the questions as shown below:

Set root password? [Y/n] n
Remove anonymous users? [Y/n] y
Disallow root login remotely? [Y/n] y
Remove test database and access to it? [Y/n] y
Reload privilege tables now? [Y/n] y

Once you have finished, log in to the MariaDB console with the following command:

mysql -u root -p

Enter your MariaDB root password and hit Enter. After logging in, create a database for WordPress, along with a database user (replace 'user' and 'password' with your own values):

MariaDB [(none)]> CREATE DATABASE wordpress;
MariaDB [(none)]> GRANT ALL PRIVILEGES on wordpress.* to 'user'@'localhost' identified by 'password';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> exit
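To confirm the grant worked, you can log back in as the new user (using the placeholder credentials above) and list the visible databases; the output should include the wordpress database:

mysql -u user -p -e "SHOW DATABASES;"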

Installing and Configuring WordPress

You can download the latest version of the WordPress source from the official website by running the following command:

wget http://wordpress.org/latest.tar.gz

Once the download has finished, extract the downloaded file with the following command:

tar -xzvf latest.tar.gz

Next, move the extracted files to the Apache web root directory:

sudo cp -avr wordpress/* /var/www/html/
sudo restorecon -r /var/www/html

Next, create a directory for WordPress to store uploaded files:

sudo mkdir /var/www/html/wp-content/uploads

Next, assign proper ownership and permissions to your WordPress files and folders:

sudo chown -R apache:apache /var/www/html/
sudo chmod -R 755 /var/www/html/

Next, you will need to make some changes in the WordPress main configuration file so that WordPress can connect to the database with the user you created.

First, rename and edit the WordPress main configuration file:

cd /var/www/html/
sudo mv wp-config-sample.php wp-config.php
sudo nano wp-config.php

Change the DB_NAME, DB_USER, and DB_PASSWORD variables as shown below:

define('DB_NAME', 'wordpress');
define('DB_USER', 'user');
define('DB_PASSWORD', 'password');

Save and close the file when you are finished.
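Optionally, you can also replace the placeholder authentication keys and salts in wp-config.php with unique values. WordPress provides a generator you can query directly, then paste its output over the corresponding define() lines:

curl -s https://api.wordpress.org/secret-key/1.1/salt/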

Accessing WordPress Web Installation Wizard

Before starting, you will need to allow access to the Apache ports using firewalld.

You can do this by running the following command:

sudo firewall-cmd --permanent --zone=public --add-service=http
sudo firewall-cmd --permanent --zone=public --add-service=https
sudo firewall-cmd --reload
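To confirm the rules are active, you can list the services allowed in the public zone; the output should now include http and https:

sudo firewall-cmd --zone=public --list-services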

Next, open your web browser and type the URL http://your-server-ip. You should see the following page:

WordPress language selection

Select the language of your choice and click the Continue button; you should see the following page:

WordPress site info page

Fill out all the required site information and click the Install WordPress button. You should see the WordPress default dashboard as below:

WordPress dashboard page

Once the installation is completed, you can log in to WordPress by typing the URL http://your-server-ip/wp-login.php in your web browser. You should see the WordPress login page as below:

WordPress login page

Next, provide the username and password which you created earlier and click the Log In button; you should see the following page:

WordPress dashboard

Summary

Congratulations! You have successfully installed WordPress on CentOS 7. I hope you now have enough knowledge to easily host your own WordPress blog. Feel free to comment below if you have any questions.

Avoiding Cloud Vendor Lock-in

Always ask before entering into any contract, “How do I get my data out in the future if I need or want to?”

Cloud vendor lock-in is typically a situation in which a customer using a product or service cannot easily transition to a competitor. Lock-ins are usually the result of proprietary technologies that are incompatible with those of competitors, though they can also be caused by inefficient processes or constraints, among other things. I’ve seen many customers come up against this in the past with traditional data centers, where their storage vendor or hypervisor solutions locked them into fixed stacks that stopped them being agile in moving to new technologies. The cloud, whether public or private, can be no different when it comes to using lock-in techniques to retain a user base.

Fear of Lock-in

Cloud lock-in is often cited as the major obstacle to cloud service adoption. There are a number of reasons why a company may look to migrate to the cloud; most often it’s about reducing the physical infrastructure they have in their data centers. The cloud gives them the agility they’re looking for, while reducing not only the CAPEX but also the OPEX required for the ongoing maintenance of their systems.

There’s also the question of how they should migrate to the cloud. The complexities of the migration process may mean that the customer stays with their current provider, which can be a compromise: the provider may not meet all their needs, limiting the agility of their IT and the value it provides to the business.

In some cases, during a migration to another provider, it may be necessary to move the data and services back to the original on-premises location, which in itself may be an issue: the original architecture may no longer be available, or the data center may now have reduced resources that prohibit such an action. Furthermore, the data may have been changed to allow it to operate on a particular cloud vendor’s platform, and would need to be altered again to run on an alternative cloud platform.

Cloud vendor lock-in

It’s only natural that cloud vendors want to lock you in; after all, they’re there to make money and need you to stay with them. They work at ways to keep you using their services and try to ensure that migrations are not an easy task. Their customers often don’t know the impact until they try to migrate, and it can be devastating when it happens. Due to these challenges, migration services from third-party vendors are becoming a common occurrence and are turning into a lucrative business.

Taking the leap

Most companies I’ve talked to recently have had similar experiences when looking to migrate from their current cloud vendors. The majority were unhappy with the costs of using cloud infrastructure; after all, cloud was supposed to be cheap, but the ROI was taking longer than first anticipated. The cloud vendors’ support services were a close second, due to the lack of any personal experience offered by their vendor; I guess there’s only a number of times that “Take a look at this FAQ” is going to help.

One of the other major problems with cloud vendors is that you typically need to over-allocate already inflated resources to the services you are providing, as cloud resources are most of the time shared with other users of the platform. It’s a bit like a house share: the last thing you need is someone hogging the bathroom.

PaaS services were another reason. Whilst PaaS is great for reducing the OPEX of the underlying infrastructure and application or database services, it does start to get expensive with large numbers of API gateway calls, which, if unplanned for, can be a bit of a surprise when you get your invoice. On top of that, one cloud’s PaaS may not be interoperable with another’s, so some type of data cleaning is going to be needed.

GDPR (there, I’ve said it) was another reason which raised its head, especially where the vendor was US-based, in which case the CLOUD Act comes into effect.

https://docs.house.gov/billsthisweek/20180319/BILLS-115SAHR1625-RCP115-66.pdf#page=2201

If you’re using a US-based provider then your data is no longer private, as it can be handed over to the US government if they deem there to be a suspect need, and hosting in a region outside of the US doesn’t help either, so using an Irish region will not allow you to escape the act. The last time I checked, the big three public clouds are all US-owned, but if you believe this may not affect you, you don’t need to look too far to see it in action: I’m sure we all remember Cambridge Analytica and the Facebook debacle; that company had to hand over its data and now no longer exists! Taking up a hybrid cloud approach and using a dedicated European provider with multiple-region support will help avoid this.

One company that I spoke to had a concerning case in that their cloud vendor had no export facility for the data, so they faced challenges in how to cleanly extract it. This was compounded even more when the tax man called in an audit on their accounts during the migration phase, and they had to take a hit on a penalty as the accounts were not available at the time of the audit. The whole process was painful and time consuming, and they surely learnt a lot from the experience.

And the moral of the story is …..

Ask the important questions: “How is the data securely stored?”, “Who has access to my data?”, “How is my data protected?”, “Do I need to modify my data so the cloud vendor can store it?” and, most importantly, “How do I get my data out in the future if I need or want to?” In most cases getting your data out is going to cost you, but knowing that it’s possible is half the battle. If your new provider has tools to make it easier for you, then that’s even better.

And lastly

  • Be aware of the existence of the CLOUD Act and its potential implications for your business.
  • Adopt a hybrid cloud strategy which clearly defines which data can be stored in public cloud services, and what should be stored in data centers operated by European managed service operators.
  • If you have large amounts of customer data, and would like to alert your customers if you do get a request to hand over personal data under the CLOUD Act, you might want to consider adding a warrant canary clause to your website.

Comparing Public Cloud Performance – Part Three – GCP

In the first part of this series I looked at Azure VMs and provided a comparison with the IONOS Enterprise Cloud, and in the second part we looked at AWS. This final post of the series will compare Google Cloud Platform (GCP).

As a bit of background in case you haven’t read the first or second parts yet: I’ve been working with the major cloud vendors for some years now, and for me performance has always been a key factor when choosing the right platform. I’ve always struggled to find the right balance of cost versus performance, and have created this blog to highlight some of the differences.

I’ve just started a new role as Cloud Architect for 1&1 IONOS Enterprise Cloud, and one of the main factors in coming here was the technology and some of the claims it makes, especially around performance and simplicity. This blog will test those performance claims, and also the cost benefit that choosing the right cloud provider will bring you.

For these tests I’ve kept it simple. I’m using small instances that will host microservices, so cost is one variable but performance is another. I will be creating an instance with 1 vCPU and 2 GB RAM as a baseline for testing, and I will use Novabench (novabench.co.uk) for some basic CPU and RAM performance modelling. There are so many tools out there, but I find this one really quick and simple for testing some key attributes. I will also be using the same tool on all the instances, so the results are unbiased too.

So on with the comparison, and next up is GCP. For this I’ve selected a custom VM size, as this is as near as consistent with the other instances on the clouds I have been testing. The CPU used is an Intel Xeon 2.3 GHz, and the price for this, including Windows Server licensing and support costs, comes out at £50.64 per month.


GCP Pricing calculator for Custom VM

For the IONOS Enterprise Cloud I’ve also selected a similar spec to GCP, which is 1 CPU and 2 GB RAM, and have used the Intel Haswell E5-2660 v3 based chip for the OS, as this will be as close as possible to the custom VM in GCP. Like GCP, I’ve also included the Windows Server license cost in the subscription, along with 24/7 support, which is actually free. The monthly cost for this server is £59.18, so comparing costs there is a slight benefit to using GCP: you would save £102.48 over the year. So GCP has a cost edge over IONOS, but what about the performance?


IONOS Enterprise Cloud Pricing for GCP 1 CPU 2Gb RAM equivalent

First I wanted to see how the external and internal internet connectivity was performing. To no big surprise, IONOS way outperformed GCP by a factor of 2, which is to be expected given the infrastructure back-end design running on InfiniBand and the datacentre interconnects. The download speed, though, was comparable for Google, which you would expect from the internet giant.

GCP Speedtest performance rating

IONOS Enterprise Cloud Speedtest performance rating

Next the focus turned to CPU, RAM and disk performance. For this I ran the Novabench performance utility on both servers, and the tests did throw up some major differences between the two. Let’s take a look at GCP first.

GCP custom 1 vCPU & 2GB Ram VM Novabench Results

The GCP results were interesting, in that roughly twice the resources would be required to get to the level of the IONOS instance. The GCP instance scored more or less half that of IONOS in its CPU, RAM and disk benchmarks, though it must be noted that GCP instances are hosted on shared resources. The RAM throughput was much lower, with a difference of 11964 MB/s, and what was most noticeable was that the disk read and write performance was half that of IONOS; the write speed was not what would be expected from SSD storage.

The IONOS Enterprise Cloud exhibited nearly twice the benchmark values of GCP.

IONOS Instance Novabench result

Conclusion

Due to the dedicated resources used by the IONOS Enterprise Cloud, it becomes apparent that other public cloud vendors have to double (GCP and AWS) or even quadruple (Azure) their resource configurations to be comparable in performance to IONOS. For GCP to catch up to a similar performance to that of the IONOS Enterprise Cloud, the GCP instance would need to be reconfigured as a custom VM with 2 vCPUs and 4 GB RAM, twice the resources of the IONOS instance. That would increase the monthly cost to £94.57, which equates to £1134.84 for the year, meaning you would pay an extra £424.68 per year for an instance of equal performance to the IONOS one.

GCP custom 2 vCPU& 4GB Ram VM Novabench Results

Can you really justify that type of expense, spending an additional £400 or more per year for just one system for the same performance? The IONOS Enterprise Cloud provides dedicated CPU and memory and is surely the way to go.

Don’t just take my word for it, give it a go yourself, I’m sure you’ll be impressed with the results.

Get your free 30 day no obligation trial at https://www.ionos.co.uk/pro/enterprise-cloud/

Nagios Core Upgrade on CentOS 7

We are going to upgrade Nagios Core from 4.1.1 to 4.4.2.

Backup Existing Nagios Configuration
Nagios and Apache services should be stopped:

systemctl stop nagios httpd

Make sure that we have a backup:

rsync -rav /usr/local/nagios/ /opt/nagios411backup/

Upgrade and Configuration
Download Nagios Core release 4.4.2 and extract the archive:

wget https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.4.2.tar.gz
tar xf ./nagios-4.4.2.tar.gz && cd ./nagios-4.4.2

Configure and compile:

./configure --with-command-group=nagcmd
make all

Install the main program, CGIs, HTML files, sample config files etc:

make install
make install-init
make install-commandmode
make install-config
make install-webconf
make install-webconfig

Restore the configuration file nagios.cfg from the backup:

cp -f /opt/nagios411backup/etc/nagios.cfg /usr/local/nagios/etc/

Restore the password file htpasswd.users if required:

cp -f /opt/nagios411backup/etc/htpasswd.users /usr/local/nagios/etc/

Restore objects:

rsync -rav /opt/nagios411backup/etc/objects/ /usr/local/nagios/etc/objects/

In our case we also want to restore all custom monitoring configuration files:


rsync -rav /opt/nagios411backup/etc/monitoring/ /usr/local/nagios/etc/monitoring/

The normal_check_interval and retry_check_interval directives are deprecated and will be removed in future versions, so we might as well change them now:

sed -i 's/normal_check_interval/check_interval/g' /usr/local/nagios/etc/objects/templates.cfg
sed -i 's/normal_check_interval/check_interval/g' /usr/local/nagios/etc/objects/printer.cfg
sed -i 's/normal_check_interval/check_interval/g' /usr/local/nagios/etc/objects/switch.cfg
sed -i 's/retry_check_interval/retry_interval/g' /usr/local/nagios/etc/objects/templates.cfg
sed -i 's/retry_check_interval/retry_interval/g' /usr/local/nagios/etc/objects/printer.cfg
sed -i 's/retry_check_interval/retry_interval/g' /usr/local/nagios/etc/objects/switch.cfg
sed -i 's/^command_check_interval/#command_check_interval/g' /usr/local/nagios/etc/nagios.cfg

We use Nagiosgraph, therefore we need the following for performance data processing to continue (the config file which we restored from the backup contains the line already, so this is mainly for future reference).

sed -i 's/process_performance_data=0/process_performance_data=1/g' /usr/local/nagios/etc/nagios.cfg

Reload and restart the services:

systemctl daemon-reload
systemctl restart nagios
systemctl restart httpd

Verify:

/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

Nagios Core 4.4.2
Copyright (c) 2009-present Nagios Core Development Team and Community Contributors
Copyright (c) 1999-2009 Ethan Galstad
Last Modified: 2017-08-24
License: GPL

Website: https://www.nagios.org
Reading configuration data…
Read main config file okay…
Read object config files okay…

Running pre-flight check on configuration data…

Checking objects…
Checked 1671 services.
Checked 190 hosts.
Checked 44 host groups.
Checked 47 service groups.
Checked 5 contacts.
Checked 6 contact groups.
Checked 126 commands.
Checked 7 time periods.
Checked 0 host escalations.
Checked 0 service escalations.
Checking for circular paths…
Checked 190 hosts
Checked 0 service dependencies
Checked 0 host dependencies
Checked 7 timeperiods
Checking global event handlers…
Checking obsessive compulsive processor commands…
Checking misc settings…

Total Warnings: 0
Total Errors: 0

Things look okay – No serious problems were detected during the pre-flight check
If there are any configuration mismatches between the old and the new Nagios versions that affect your setup, change them accordingly.

How To Install Nagios 4 and Monitor Your Servers on CentOS 7

Introduction

In this tutorial, we will cover the installation of Nagios 4, a very popular open source monitoring system, on CentOS 7 or RHEL 7. We will cover some basic configuration, so you will be able to monitor host resources via the web interface. We will also utilize the Nagios Remote Plugin Executor (NRPE), that will be installed as an agent on remote hosts, to monitor their local resources.

Nagios is useful for keeping an inventory of your servers, and making sure your critical services are up and running. Using a monitoring system, like Nagios, is an essential tool for any production server environment.

Prerequisites

To follow this tutorial, you must have superuser privileges on the CentOS 7 server that will run Nagios. Ideally, you will be using a non-root user with superuser privileges.

A LAMP stack is also required. Follow this tutorial if you need to set that up: How To Install LAMP stack On CentOS 7.

This tutorial assumes that your server has private networking enabled. If it doesn’t, just replace all the references to private IP addresses with public IP addresses.

Now that we have the prerequisites sorted out, let’s move on to getting Nagios 4 installed.

Install Nagios 4

This section will cover how to install Nagios 4 on your monitoring server. You only need to complete this section once.

Install Build Dependencies

Because we are building Nagios Core from source, we must install a few development libraries that will allow us to complete the build.

First, install the required packages:

sudo yum install gcc glibc glibc-common gd gd-devel make net-snmp openssl-devel xinetd unzip

Create Nagios User and Group

We must create a user and group that will run the Nagios process. Create a “nagios” user and “nagcmd” group, then add the user to the group with these commands:

sudo useradd nagios
sudo groupadd nagcmd
sudo usermod -a -G nagcmd nagios

Let’s install Nagios now.

Install Nagios Core

Download the source code for the latest stable release of Nagios Core. Go to the Nagios downloads page, and click the Skip to download link below the form. Copy the link address for the latest stable release so you can download it to your Nagios server.

At the time of this writing, the latest stable release is Nagios 4.1.1. Download it to your home directory with curl:

cd ~
curl -L -O https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.1.1.tar.gz

Extract the Nagios archive with this command:

tar xvf nagios-*.tar.gz

Then change to the extracted directory:

cd nagios-*

Before building Nagios, we must configure it with this command:

./configure --with-command-group=nagcmd 

Now compile Nagios with this command:

make all

Now we can run these make commands to install Nagios, init scripts, and sample configuration files:

sudo make install
sudo make install-commandmode
sudo make install-init
sudo make install-config
sudo make install-webconf

In order to issue external commands via the web interface to Nagios, we must add the web server user, apache, to the nagcmd group:

sudo usermod -a -G nagcmd apache

Install Nagios Plugins

Find the latest release of Nagios Plugins here: Nagios Plugins Download. Copy the link address for the latest version so you can download it to your Nagios server.

At the time of this writing, the latest version is Nagios Plugins 2.1.1. Download it to your home directory with curl:

cd ~
curl -L -O http://nagios-plugins.org/download/nagios-plugins-2.1.1.tar.gz

Extract Nagios Plugins archive with this command:

tar xvf nagios-plugins-*.tar.gz

Then change to the extracted directory:

cd nagios-plugins-*

Before building Nagios Plugins, we must configure it. Use this command:

./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-openssl

Now compile Nagios Plugins with this command:

make

Then install it with this command:

sudo make install

Install NRPE

Find the source code for the latest stable release of NRPE at the NRPE downloads page. Download the latest version to your Nagios server.

At the time of this writing, the latest release is 2.15. Download it to your home directory with curl:

cd ~
curl -L -O http://downloads.sourceforge.net/project/nagios/nrpe-2.x/nrpe-2.15/nrpe-2.15.tar.gz

Extract the NRPE archive with this command:

tar xvf nrpe-*.tar.gz

Then change to the extracted directory:

cd nrpe-*

Configure NRPE with these commands:

./configure --enable-command-args --with-nagios-user=nagios --with-nagios-group=nagios --with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib64

Now build and install NRPE and its xinetd startup script with these commands:

make all
sudo make install
sudo make install-xinetd
sudo make install-daemon-config

Open the xinetd startup script in an editor:

sudo vi /etc/xinetd.d/nrpe

Modify the only_from line by adding the private IP address of your Nagios server to the end (substitute in the actual IP address of your server):

only_from = 127.0.0.1 10.132.224.168

Save and exit. Only the Nagios server will be allowed to communicate with NRPE.

Restart the xinetd service to start NRPE:

sudo service xinetd restart
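To check that NRPE is responding locally, you can query it with the check_nrpe plugin (assuming the default /usr/local/nagios install prefix); it should reply with its version, e.g. NRPE v2.15:

/usr/local/nagios/libexec/check_nrpe -H 127.0.0.1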

Now that Nagios 4 is installed, we need to configure it.

Configure Nagios

Now let’s perform the initial Nagios configuration. You only need to perform this section once, on your Nagios server.

Organize Nagios Configuration

Open the main Nagios configuration file in your favorite text editor. We’ll use vi to edit the file:

sudo vi /usr/local/nagios/etc/nagios.cfg

Now find and uncomment this line by deleting the #:

#cfg_dir=/usr/local/nagios/etc/servers

Save and exit.

Now create the directory that will store the configuration file for each server that you will monitor:

sudo mkdir /usr/local/nagios/etc/servers

Configure Nagios Contacts

Open the Nagios contacts configuration in your favorite text editor. We’ll use vi to edit the file:

sudo vi /usr/local/nagios/etc/objects/contacts.cfg

Find the email directive, and replace its value (the highlighted part) with your own email address:

email                           [email protected]        ; <<***** CHANGE THIS TO YOUR EMAIL ADDRESS ******

Save and exit.

Configure check_nrpe Command

Let's add a new command to our Nagios configuration:

sudo vi /usr/local/nagios/etc/objects/commands.cfg

Add the following to the end of the file:

define command{
        command_name check_nrpe
        command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}

Save and exit. This allows you to use the check_nrpe command in your Nagios service definitions.

Configure Apache

Use htpasswd to create an admin user, called "nagiosadmin", that can access the Nagios web interface:

sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

Enter a password at the prompt. Remember this login, as you will need it to access the Nagios web interface.

Note: If you create a user that is not named "nagiosadmin", you will need to edit /usr/local/nagios/etc/cgi.cfg and change all the "nagiosadmin" references to the user you created.

Nagios is ready to be started. Let's do that, and restart Apache:

sudo systemctl daemon-reload
sudo systemctl start nagios.service
sudo systemctl restart httpd.service

To enable Nagios to start on server boot, run this command:

sudo chkconfig nagios on

Optional: Restrict Access by IP Address

If you want to restrict the IP addresses that can access the Nagios web interface, you will want to edit the Apache configuration file:

sudo vi /etc/httpd/conf.d/nagios.conf

Find and comment out the following two lines by adding a # symbol in front of each:

Order allow,deny
Allow from all

Then uncomment the following lines by deleting the # symbols, and add the IP addresses or ranges (space delimited) that you want to allow to the Allow from line:

#  Order deny,allow
# Deny from all
# Allow from 127.0.0.1
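When you are done, the access control section should look something like this (203.0.113.0/24 is a hypothetical range; substitute your own addresses):

  Order deny,allow
  Deny from all
  Allow from 127.0.0.1 203.0.113.0/24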

These lines appear twice in the configuration file, so you will need to perform these steps once more.

Save and exit.

Now start Nagios and restart Apache to put the change into effect:

sudo systemctl restart nagios.service
sudo systemctl restart httpd.service

Nagios is now running, so let's try and log in.

Accessing the Nagios Web Interface

Open your favorite web browser, and go to your Nagios server (substitute the IP address or hostname for the highlighted part):

http://nagios_server_public_ip/nagios

Because we configured Apache to use htpasswd, you must enter the login credentials that you created earlier. We used "nagiosadmin" as the username:

htaccess Authentication Prompt

After authenticating, you will see the default Nagios home page. Click on the Hosts link in the left navigation bar to see which hosts Nagios is monitoring:

Nagios Hosts Page

As you can see, Nagios is monitoring only "localhost", or itself.

Let's monitor another host with Nagios!

Monitor a CentOS 7 Host with NRPE

In this section, we'll show you how to add a new host to Nagios, so it will be monitored. Repeat this section for each CentOS or RHEL server you wish to monitor.

Note: If you want to monitor an Ubuntu or Debian server, follow the instructions in this link: Monitor an Ubuntu Host with NRPE.

On a server that you want to monitor, install the EPEL repository:

sudo yum install epel-release

Now install Nagios Plugins and NRPE:

sudo yum install nrpe nagios-plugins-all

Now, let's update the NRPE configuration file. Open it in your favorite editor (we're using vi):

sudo vi /etc/nagios/nrpe.cfg

Find the allowed_hosts directive, and add the private IP address of your Nagios server to the comma-delimited list (substitute it in place of the highlighted example):

allowed_hosts=127.0.0.1,10.132.224.168

Save and exit. This configures NRPE to accept requests from your Nagios server, via its private IP address.

Start NRPE and enable it to start at boot:

sudo systemctl start nrpe.service
sudo systemctl enable nrpe.service
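From the Nagios server, you can confirm the remote agent is reachable over the private network (10.132.234.52 is the example remote host IP used in the next section):

/usr/local/nagios/libexec/check_nrpe -H 10.132.234.52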

Once you are done installing and configuring NRPE on the hosts that you want to monitor, you will have to add these hosts to your Nagios server configuration before it will start monitoring them.

Add Host to Nagios Configuration

On your Nagios server, create a new configuration file for each of the remote hosts that you want to monitor in /usr/local/nagios/etc/servers/. Replace the highlighted word, "yourhost", with the name of your host:

sudo vi /usr/local/nagios/etc/servers/yourhost.cfg

Add in the following host definition, replacing the host_name value with your remote hostname ("web-1" in the example), the alias value with a description of the host, and the address value with the private IP address of the remote host:

define host {
        use                             linux-server
        host_name                       yourhost
        alias                           My first Apache server
        address                         10.132.234.52
        max_check_attempts              5
        check_period                    24x7
        notification_interval           30
        notification_period             24x7
}

With the configuration file above, Nagios will only monitor if the host is up or down. If this is sufficient for you, save and exit then restart Nagios. If you want to monitor particular services, read on.

Add any of these service blocks for services you want to monitor. Note that the value of check_command determines what will be monitored, including status threshold values. Here are some examples that you can add to your host's configuration file:

Ping:

define service {
        use                             generic-service
        host_name                       yourhost
        service_description             PING
        check_command                   check_ping!100.0,20%!500.0,60%
}

SSH (notifications_enabled set to 0 disables notifications for a service):

define service {
        use                             generic-service
        host_name                       yourhost
        service_description             SSH
        check_command                   check_ssh
        notifications_enabled           0
}

If you're not sure what use generic-service means, it is simply inheriting the values of a service template called "generic-service" that is defined by default.
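You can also use the check_nrpe command we defined earlier to monitor resources local to the remote host. As a sketch, this assumes a check_load command is defined in the remote host's /etc/nagios/nrpe.cfg (the sample nrpe.cfg ships with one):

define service {
        use                             generic-service
        host_name                       yourhost
        service_description             CPU Load
        check_command                   check_nrpe!check_load
}

The argument after the ! is passed to check_nrpe as $ARG1$ and selects which command NRPE runs on the remote host.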

Now save and quit. Reload your Nagios configuration to put any changes into effect:

sudo systemctl reload nagios.service
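It's also good practice to validate the configuration whenever you add hosts or services, so a typo doesn't stop Nagios from loading:

sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg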

Once you are done configuring Nagios to monitor all of your remote hosts, you should be set. Be sure to access your Nagios web interface, and check out the Services page to see all of your monitored hosts and services:

Nagios Services Page

Conclusion

Now that you're monitoring your hosts and some of their services, you might want to spend some time figuring out which services are critical to you, so you can start monitoring those. You may also want to set up notifications so that, for example, you receive an email when your disk utilization reaches a warning or critical threshold, or when your main website is down, so you can resolve the situation promptly, or before a problem even occurs.

Configure LAMP on CentOS 7

Installing LAMP

This guide configures your CentOS server with LAMP (Linux, Apache, MySQL and PHP) and the other packages required by your application.

You can install all the necessary packages with the following command:

sudo yum install httpd mariadb mariadb-server php php-common php-mysql php-gd php-xml php-mbstring php-mcrypt php-xmlrpc unzip wget -y

Once installation is complete, start the Apache and MariaDB services and enable them to start at boot with the following commands:

sudo systemctl start httpd
sudo systemctl start mariadb
sudo systemctl enable httpd
sudo systemctl enable mariadb

Configuring MariaDB for your application

By default MariaDB is not secured, so you will need to secure it first. You can do this by running mysql_secure_installation script:

sudo mysql_secure_installation

Answer all the questions as shown below:

Set root password? [Y/n] n
Remove anonymous users? [Y/n] y
Disallow root login remotely? [Y/n] y
Remove test database and access to it? [Y/n] y
Reload privilege tables now? [Y/n] y

Your CentOS host is now ready for your application and is configured as a LAMP server.

Comparing Public Cloud Performance – Part Two – AWS

In the first part of this series I looked at Azure VMs and provided a comparison with the IONOS Enterprise Cloud; this next part will focus on AWS.
As a bit of background in case you haven’t read the first part yet: I’ve been working with the major cloud vendors for some years now, and for me performance has always been a key factor when choosing the right platform. I’ve always struggled to find the right balance of cost versus configuration, and have created this blog to highlight some of the differences.
I’ve just started a new role as Cloud Architect for 1&1 IONOS Enterprise Cloud, and one of the main factors in coming here was the technology and some of the claims it makes, especially around performance and simplicity. This blog will test those performance claims, and also the cost benefit that choosing the right cloud provider will bring you.
For these tests I’ve kept it simple. I’m using small instances that will host microservices, so cost is one variable but performance is another. I will be creating an instance with 1 vCPU and 2 GB RAM as a baseline for testing, and I will use Novabench (novabench.co.uk) for some basic CPU and RAM performance modelling. There are so many tools out there, and I find this one really quick and simple for testing some key attributes; using the same tool on all the instances will show unbiased results too.
So on with the comparison, and next up is AWS. As AWS doesn’t have a 1 CPU and 2 GB RAM flavour to choose from, I’ve selected the M4 Large size, as this is as near as consistent with the other instances on the clouds I have been testing, albeit double that of the IONOS Enterprise Cloud size. The CPU used is an Intel Haswell E5-2660, and the price for this, including Windows Server licensing and support costs, comes out at $140.55 per month, which equates to £109.22 as calculated by Google’s currency converter at the time of writing.

AWS Pricing calculator for M4 Large

For the IONOS Enterprise Cloud I’ve selected a slightly reduced spec compared to AWS, and have used the Intel Haswell E5-2660 v3 based chip for the OS, as going by my testing this should be very close to the M4 Large instance in AWS. As with AWS, I’ve also included the Windows Server license cost in the subscription, along with 24/7 support, which is actually free. The monthly cost for this server is £50.96, so comparing costs, using the IONOS Enterprise Cloud would save £699.12 over the year. A saving is a saving, so on paper the costs look good so far.

IONOS Enterprise Cloud Pricing

Now what about performance tests between the two? First I wanted to see how the external and internal internet connectivity was performing. To no big surprise, IONOS way outperformed AWS by a factor of 2, which is to be expected given the infrastructure back-end design running on InfiniBand and the datacentre interconnects.

AWS Speedtest performance rating

IONOS Enterprise Cloud Speedtest performance rating

Next the focus turned to CPU, RAM and disk performance. For this I ran the Novabench performance utility on both servers, and the tests did throw up some major differences between the two. Let’s take a look at AWS first.

AWS M4 Large Instance Novabench Results

The AWS results were interesting, in that twice the resources were required to get to the same level as the IONOS instance. The AWS instance had a more or less equal score for its CPU, RAM and disk benchmarks, but it must be noted that AWS resources are shared between the instances hosted on AWS. The RAM score came in at a lower throughput, with a difference of 5733 MB/s, and what was most noticeable was that the disk read and write performance was half that of IONOS.
The IONOS Enterprise Cloud exhibited similar results to AWS but consumed half the resources.

IONOS Instance Novabench result

Conclusion
Due to the dedicated resources used by the IONOS Enterprise Cloud, it becomes apparent that other public cloud vendors have to double (AWS and Google) or even quadruple (Azure) their resource configurations to be comparable in performance to IONOS. When comparing AWS to IONOS, to get to a similar performance to that of the IONOS Enterprise Cloud the AWS instance needs twice the resources, at a monthly cost of $140.55 (£109.22), which equates to £1310.64 for the year: £699.12 of that is the premium for an instance of equal performance to the IONOS one. And don’t forget this is for a single system, so once you’re deploying 100s or 1000s of instances that soon racks up.
Can you really justify that type of expense, spending an additional £700 per year for one system for the same performance? The IONOS Enterprise Cloud provides dedicated CPU and memory and is surely the way to go.
Get your free 30 day no obligation trial at https://www.ionos.co.uk/pro/enterprise-cloud/