The Internet is slow, but why?

Recently, I encountered an issue from one of our client ISPs in Thailand that uses our NAT product. The conversation started like this.

“Well, OK, what’s the problem, sir?” I asked.

“Our client reported to us that their Internet connection is slow. Can you identify the main cause of the problem?” he said.

“Before we used your product, each client had their own public IP address, and there were no issues like this,” he added.

“I will, sir,” I replied right away.

A little background: NAT stands for Network Address Translation, a technique that lets one public IP address serve one PC or many PCs on a private network so they can access the Internet. A good, easy explanation can be found at https://whatismyipaddress.com/nat, or there is the long read in RFC 2663.

So, in this case, our ISP (Internet Service Provider) conserves public IP addresses by using NAT. Their clients therefore share IP addresses to access the Internet.

Life would be easy if that were all, but no. Their clients share not only IP addresses but also TCP/UDP ports (64,511 ports, i.e. 65,535 minus the 1,024 well-known ports). The problem is the TCP/UDP ports each user shares. But why is that a problem? Let’s dig in!!


Photo from https://www.drivers.com/update/pc-fix-tips/10-tips-to-a-faster-pc/

The picture above might seem familiar when you’re trying to view a YouTube video, play an online game, watch a live stream, or reach anything outside your own network that needs the Internet.

At the micro level this happens to you as an individual, but at the macro level, for an ISP, it is a port allocation problem. As I said earlier, NAT makes users share an IP address along with its TCP/UDP ports. If the ISP’s setting is 1 IP address per 40 users, then 64,511 ports are shared among 40 users. The thing is, if one user consumes 5,000 TCP/UDP ports, only 59,511 remain for the other 39 users. No problem, right? Now imagine all 40 users wanting 5,000 ports at the same time. What would happen? There would be no ports left.
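The arithmetic is easy to check. A quick sketch (the 40-users-per-IP ratio is just this example's assumption, not a fixed rule):

```shell
# Ports a NAT box can hand out per public IP: the 16-bit port range
# minus the 1,024 well-known ports (0-1023)
total_ports=$((65535 - 1024))          # 64,511 shareable ports
users_per_ip=40                        # assumed oversubscription ratio
echo "$((total_ports / users_per_ip)) ports per user if split evenly"
```

So an even split gives each user roughly 1,612 ports; a single user grabbing 5,000 is already taking about three times their fair share.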

Typically, a normal user needs no more than about 200 TCP and UDP ports. I set up a lab for NAT using an A10 virtual machine to measure how many TCP/UDP ports one user consumes while surfing the Internet (Facebook, YouTube, and general browsing). The result is pretty straightforward: the number of TCP/UDP ports used is remarkably low.

It shows only 3 UDP ports and 65 TCP ports in use. The session count is not tied to port consumption, because each TCP or UDP port can carry multiple sessions. I figured that testing with an online game would push the port count much higher.
So one user’s result looks normal, but when I compared it with my client’s environment, I saw something strange. The top TCP and UDP port users were hitting the maximum of 2,500 ports per user that I had configured. How can one user be consuming 2,497 TCP ports?

What likely happened is that some users were infected by malware that made their machines open far more ports, either directly or through an attacker exploiting them. There are many known malware families in today’s world; take a peek at some of them at https://www.veracode.com/blog/2012/10/common-malware-types-cybersecurity-101

Among them I found some of the most common DDoS attack vectors. If your computer is infected, it may become part of a botnet controlled by a hacker, carrying out attacks you know nothing about. Cyber war happens every day, whether you realize it or not. Look at the ThreatMap from our supplier Fortinet at https://threatmap.fortiguard.com/, which shows a pretty cool geographic map of attacks in real time.

In the end, when you experience a slow Internet connection, first check that you are guarded by anti-malware and that your firewall is turned on. It is crucial that no one is exploiting your device and slowing you down. You pay for the bandwidth you want, and you will actually get it when you keep yourself secure.


What is Entropy? How did it fail my web server?

Prologue

Like many companies, some of Comnet’s internal tools were built back in the stone age; we continually update the databases but barely touch the application software itself. Ironically, we are strong believers in the ‘if it ain’t broke, don’t fix it’ concept, so these legacy internal applications may live on into the next decade! Another irony: we are too busy serving clients and customers to their satisfaction, in keeping with the ‘serve the customer first’ concept!

Commitment to excellence

photo from flickr by Roland Tanglao (https://www.flickr.com/photos/roland/)

Anyway, let’s get to my problem. Yesterday our Apache2 (version 2.0.55, built on Jul 26th, 2006) running on an ancient Ubuntu (6.06.1 LTS) virtual machine failed to start properly. The symptom was that nobody could access the HTTP service after a long power outage over the weekend. As a good citizen, I logged in to the server, checked that the Apache2 process had started, then ran ‘netstat -a’ (or ‘ss’ if you like) and found port 80 occupied by Apache2. Everything looked quite normal, so I restarted the service (I had to use the old ‘/etc/init.d/apache2 restart’) to make sure there had been no hiccups or errors in the previous start. Yet after the restart, Apache2 still could not provide the HTTP service (it could not even start completely). I also noticed a single ‘/usr/sbin/apache2 -k start’ entry in the ‘ps ax’ output, instead of the usual multiple pre-forked processes.

I took a look at /var/log/apache2/error.log and found the message:

[notice] Digest: generating secret for digest authentication ...

After a few Ubuntu and Apache2 restarts (I felt so dumb doing these!), the log showed multiple lines of the same message:

[notice] Digest: generating secret for digest authentication ...
[notice] Digest: generating secret for digest authentication ...
[notice] Digest: generating secret for digest authentication ...

After yet another (dumb) restart, the messages became:

[notice] Digest: generating secret for digest authentication ...
[notice] Digest: done
[notice] Apache/2.0.55 (Ubuntu) DAV/2 SVN/1.3.1 PHP/5.1.2 configured -- resuming normal operations

And the service was now ‘resuming normal operations’. What???

Curiosity killed the cat

photo from flickr by Kenny Louie https://www.flickr.com/photos/kwl/

What just happened?

To kill my curiosity, I started searching for the cause using the (sort of) error message as the keyword. There were a lot of discussions about similar errors; however, one from Linux Administrator, a WordPress site at ‘https://linadmin.wordpress.com/2009/06/22/apache2-hangs-with-digest-generating-secret-for-digest-authentication/‘ (which I will call article1 from now on), exactly matched the problem I had. The author mentioned that he had SSL enabled, while in my case SSL was disabled. The article goes on to explain that Linux /dev/random uses entropy collected from the environment, such as keyboard activity, audio noise, etc.

The author further concludes that a lack of entropy in the system caused /dev/random to block (not return a value to the calling software), and hence /dev/urandom, which never blocks, should be used instead to ensure that Apache2 does not pause during startup.

Wait, what is entropy?

Entropy generally refers to the level of chaos or randomness in a system; most of us learned the word in a basic thermodynamics class. It has a similar meaning in information technology, where it is typically referred to as information entropy, defined (per Wikipedia) as the average amount of information produced by a stochastic source of data.

In a computing system, randomness has many applications, especially in the security arena: cryptography-related uses (SSL/TLS, encryption, etc.), or making certain information harder to guess (increasing unpredictability), such as the TCP initial sequence number (ISN) or ASLR (Address Space Layout Randomization). Linux provides random numbers to applications and subsystems that need them through the /dev/random and /dev/urandom device drivers.

Random numbers can be generated with a pseudorandom number generator algorithm, which provides sufficient randomness for most applications. However, to further strengthen unpredictability, Linux is designed to collect randomness from the environment, for example inter-keystroke timing, audio noise, or inter-interrupt timing, convert it to digital values, and store it in an ‘entropy pool’. /dev/random, the user interface to random number generation in the Linux kernel, fetches and transforms data from the entropy pool and returns the result to the application that requested random bits. For /dev/random, entropy is depleted by the same amount it hands out, and to maintain its level of non-determinism, /dev/random will wait if there are insufficient entropy bits to supply the application. The ‘entropy_avail’ counter (an estimate) reports the number of bits available in the entropy pool and can be one place for a Linux administrator to look when a problem seems mysterious.

Applying theory to my problem!

In my case, the situation was a bit different: SSL was disabled, yet I still experienced the problem. After looking at the source code of mod_auth_digest.c (the same version running on my Ubuntu 6), it became clear that ‘initialize_secret()’ needs random numbers from the system, and by default /dev/random is the source. Hence, if the system ran out of entropy, /dev/random would block; as a result mod_auth_digest was blocked too, and in the end the Apache2 initialization routine could not get past mod_auth_digest and could not finish bringing up the service! I looked at the entropy availability via

# sudo cat /proc/sys/kernel/random/entropy_avail

I found that the system had less than 100 bits available (skimming through the mod_auth_digest source code, it needs 20 bytes, i.e. 160 bits, from /dev/random to initialize the module). Somehow Apache2 eventually completed its initialization phase, albeit taking much longer than usual. That would be acceptable if Apache2 never needed to start or restart during office hours with everybody waiting for the service!
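A quick way to watch for this on any Linux box is to compare entropy_avail against the roughly 160 bits mod_auth_digest asks for. A sketch (the 160-bit threshold comes from the module's 20-byte secret mentioned above):

```shell
# Warn when the entropy estimate drops below what mod_auth_digest
# needs for its 20-byte (160-bit) digest secret
avail=$(cat /proc/sys/kernel/random/entropy_avail)
if [ "$avail" -lt 160 ]; then
    echo "entropy_avail=$avail: readers of /dev/random may block"
else
    echo "entropy_avail=$avail: enough for the digest secret"
fi
```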

I did some more research on the topic and found that running an old version of Linux on a virtual machine (ESXi in my case) is more likely to hit this problem than running on real hardware, as the VM may not collect entropy as fast as the system and applications need it (a hypervisor does not generate the audio and other noise that a real machine provides for Linux to harvest as entropy). This is especially true with /dev/random, because it produces each random bit at the expense of one bit of entropy and blocks when insufficient entropy is available. By contrast, /dev/urandom keeps generating random numbers even when the entropy estimate runs low.

So it becomes obvious that /dev/urandom should be used instead of /dev/random. In general, the numbers from /dev/random and /dev/urandom have similar quality, as both are based on the same CSPRNG (Cryptographically Secure Pseudo-Random Number Generator) in the kernel (formerly SHA-1 based; more recent kernels use ChaCha20). See http://www.2uo.de/myths-about-urandom/ for more information. To feed /dev/random from /dev/urandom, rng-tools is needed; please see reference 1 on how to achieve this. If you insist on /dev/random, another option is an entropy-generation daemon such as Haveged, or the Timer Entropy Daemon if rng-tools is not available to you.
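The non-blocking behaviour is easy to see for yourself: a read from /dev/urandom returns immediately no matter what entropy_avail says, whereas the same read from /dev/random on a starved system can hang for minutes:

```shell
# Pull 16 bytes from /dev/urandom and print them as hex;
# this returns instantly even when the entropy estimate is low
head -c 16 /dev/urandom | od -An -tx1
```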

How much entropy_avail is sufficient?

Example entropy_avail variation on a system

figure from security.stackexchange.com, asked by techraf (https://security.stackexchange.com/questions/126875/whats-eating-my-entropy-or-what-does-entropy-avail-really-show)

I needed to answer the next natural question: how many bits should there be in the entropy pool, and who consumes these random numbers? It turns out random numbers are used quite a lot in a Linux system, e.g. the TCP/IP ISN (Initial Sequence Number) in the kernel, message IDs in some mail systems, key management in SSH, SSL/TLS, session cookies in some PHP/web applications, and much more. How many bits are drawn from the pool also depends on the programming language and on how the application implements random number generation; for example, mod_auth_digest (the version I use) takes 20 bytes (160 bits) from the pool. The full pool size is 4,096 bits (512 bytes) [adjustable with ‘sysctl -w kernel.random.poolsize=xxxx’; the default is 4096] and should not drop below about 200 bits (or, better, stay at 1,024 bits or more); entropy_avail in the low hundreds will potentially delay certain applications and operations due to /dev/random blocking.
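The pool size itself can also be read straight from procfs (a quick check; note that very recent kernels report a fixed, much smaller pool rather than the 4096 bits of this era):

```shell
# Full size of the kernel entropy pool, in bits
cat /proc/sys/kernel/random/poolsize
```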

Epilogue

I decided to install ‘haveged’ to boost the entropy pool, and immediately after starting it, entropy_avail jumped to 4,096 bits (albeit only once and never again!) from about 30 bits before installation. Living with this Ubuntu VM should be much easier now!

If you are interested in Linux /dev/random and PRNGs, reference [6] is a good read; then follow with the Wikipedia article (reference [5]).

References

  1. Linux Administration Site (https://linadmin.wordpress.com/2009/06/22/apache2-hangs-with-digest-generating-secret-for-digest-authentication/)
  2. Myths about /dev/urandom by Thomas Hühn (http://www.2uo.de/myths-about-urandom)
  3. Haveged – a simple entropy daemon (http://www.issihosts.com/haveged/)
  4. Timer entropy daemon (https://www.vanheusden.com/te/)
  5. Wikipedia on /dev/random (https://en.wikipedia.org/wiki//dev/random)
  6. Linux Pseudorandom Number Generator Revisited (https://eprint.iacr.org/2012/251.pdf)

The ABC of Google Apps Script

There are cases where we want to automate something on the Internet, say, checking the availability of our website or creating a temporary URL that redirects to another site for testing. To accomplish such simple tasks we used to set up a server at our office, or on a cloud platform such as Digital Ocean, and write some Python script. The process was tedious: set up the hardware (or cloud server) and Linux, harden it, and after a few days of use, remove it from the Internet. All these tedious tasks are gone since I found Google Apps Script (and Google App Engine).

With Google Apps Script, which everybody with a Gmail account has access to, such simple tasks become really simple. No server setup is needed; I can go straight to writing a script that looks similar to (if it is not) JavaScript. After a short time getting familiar with the online editor, utilities, and services offered by Google, you are ready to program the Internet to your heart’s desire. Google Apps Script (GAS) lets you access and automate (to a certain level) Gmail, Google Calendar, Google Forms, Google Drive, and several other Google services with ease. GAS also lets you write web applications using the HTML Service without learning web servers such as Apache or NGINX.

Besides the scripting language and services, you can also set up triggers so that your script executes periodically, at a particular date and time, when a Google Docs or Spreadsheet file is opened, or when a Google Form is submitted. You can also set up email notification of errors in your script, which helps you take appropriate action when a certain event occurs.

For example, I set up a script to monitor the Comnet website; if the website goes down, the script generates and emails the error to me so I can start fixing the problem ASAP.

GAS emailed me immediately after the execution failure, with a sufficiently clear error message, including the date and time the error occurred, that I could understand it and start fixing the problem.

The GAS online editor can be accessed at ‘https://script.google.com’, and there is plenty of documentation at ‘https://developers.google.com/apps-script/’, much of it with simple examples to get you started.

The figure below illustrates how easy it is to create a website checker that runs periodically and sends an error report when it finds a problem.
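In case the screenshot does not come through, a minimal sketch of such a checker looks like this. The URL, email address, and function names are illustrative placeholders, not the actual Comnet script; UrlFetchApp, MailApp, and ScriptApp are standard Apps Script services:

```javascript
// Minimal uptime checker -- a sketch, not the actual monitoring script.
function checkWebsite() {
  var url = 'https://www.example.com/';   // replace with the site to monitor
  try {
    // muteHttpExceptions makes fetch() return error responses
    // instead of throwing on HTTP 4xx/5xx
    var response = UrlFetchApp.fetch(url, { muteHttpExceptions: true });
    if (response.getResponseCode() !== 200) {
      MailApp.sendEmail('admin@example.com', 'Website check failed',
          url + ' returned HTTP ' + response.getResponseCode() + ' at ' + new Date());
    }
  } catch (e) {
    // DNS failure, timeout, TLS error, etc. still throw
    MailApp.sendEmail('admin@example.com', 'Website check failed',
        url + ' unreachable at ' + new Date() + ': ' + e);
  }
}

// The same time-driven trigger you get from the Clock icon,
// created programmatically instead
function installTrigger() {
  ScriptApp.newTrigger('checkWebsite').timeBased().everyMinutes(15).create();
}
```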

Setting up a trigger is equally simple: just click the clock icon and you are ready to create one. Multiple triggers can be created for the same script, e.g. different time triggers for different functions.

If you want a more advanced capability, such as a redirect server, please take a look at https://www.maxlaumeister.com/blog/how-to-use-google-app-engine-as-a-free-redirect-server/ for an idea.


How to upgrade Red Hat Linux 6.9 to 7.4


Red Hat is one of many Linux distributions, alongside Ubuntu, CentOS, Fedora, and others, and many servers around the world run on it.

I recently had to upgrade one of our clients from Red Hat Linux 6.9 to 7.4, and I would like to show you how I set up a lab to test the upgrade tools. I recommend duplicating the production server’s environment for lab testing.

Please note the following:

·   This guide aims to show you the tools Red Hat provides to upgrade from version 6 to version 7.

·   The environment is a VM only, which does not reflect our client’s actual environment; therefore the outcome could differ when you try the upgrade on a production server!!!

·   I assume you can install Red Hat on your VM, whether VirtualBox, VMware, or any other virtualization application you are familiar with (mine is VirtualBox).

Precaution: please back up your system before running the upgrade, in case anything happens and you need to fresh-install Red Hat 7.4 or roll back to Red Hat 6.9.

1. First things first: check your current Red Hat version. Mine was Red Hat 6.9.

2. Be sure to update your Red Hat 6 to the latest version before attempting the Preupgrade tools, so run ‘yum update’.

If an error shows as below, note that these steps require an Internet connection to reach the Red Hat servers.

Be sure to register your subscription carefully. I found that on a system that has been running for a long time, like my client’s, I had to unregister and register again for ‘yum update’ to run properly.

Refer to thread on https://access.redhat.com/discussions/3066851?tour=8

3. On some systems, the error might say something like ‘This system is registered but cannot get updates’. The fix is the same: run these commands in sequence. Sometimes you don’t have to unregister; a refresh and ‘attach --auto’ might do the trick.

sudo subscription-manager remove --all

sudo subscription-manager unregister

sudo subscription-manager clean

Now you can re-register the system and attach the subscriptions:

sudo subscription-manager register

sudo subscription-manager refresh

sudo subscription-manager attach --auto

Note that when you unregister, your server does not go down; this only unregisters the Red Hat subscription, meaning you cannot get updates from Red Hat, but the server keeps running.

After that, you should be able to run ‘yum update’ and download the updates. At the end you should see the screen below, which means you can proceed to the upgrade procedure.

4. Then enable the repository subscription for the Preupgrade Assistant.

Then install the Preupgrade Tools
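For reference, on my lab VM this step came down to roughly the commands below. The repository and package names follow Red Hat’s 6-to-7 migration guide and may differ for your subscription, so treat this as a sketch rather than exact commands:

```shell
# Enable the repos that carry the Preupgrade Assistant
# (names vary; list yours with 'subscription-manager repos --list')
subscription-manager repos --enable rhel-6-server-extras-rpms \
                           --enable rhel-6-server-optional-rpms

# Install the assistant plus its RHEL 6 -> 7 rules, then run it
yum -y install preupgrade-assistant preupgrade-assistant-el6toel7
preupg
```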

5. Once everything is installed, run the preupgrade tool; it will take a while. The tool examines every package on the system and determines whether there are errors you need to fix before the upgrade. In my experience, I found solutions to most errors by googling, but that may not always work for your environment.

After preupg finishes running, check the file ‘/root/preupgrade/result.html’, which can be viewed in any browser. You may need to transfer the file to a computer that has one.

The result.html file shows all the necessary information about your system before an upgrade. Basically, if you see information like that in 5.1 on the screen, you are good to go.

See 5.2 for the full results of the Preupgrade tool; be sure to check them all. I found that some findings are merely informational, but check the ‘needs_action’ section carefully.

Scroll down and you will see specific information about each result. Check the remediation description for every ‘needs_action’ item and perform the suggested instructions.

6. So you have checked everything from the Preupgrade tool; now it’s time to start the upgrade.

6.1 Install the upgrade tool

[root@localhost ~]# yum -y install redhat-upgrade-tool

6.2 Disable all active repository

[root@localhost ~]# yum -y install yum-utils

[root@localhost ~]# yum-config-manager --disable \*

Now start the upgrade. I recommend saving the Red Hat 7.4 ISO file on the server and then issuing the command shown further below; it’s easier. Alternatively, you could use other options such as:

--device [DEV]

Device or mount point of mounted install media. If DEV is omitted,

redhat-upgrade-tool will scan all currently-mounted removable devices

(for example USB disks and optical media).

--network RELEASEVER

Online repos.  RELEASEVER will be used to replace $releasever variable

if it occurs in some repo URL.

[root@localhost /]# cd /root/

[root@localhost ~]# ls
anaconda-ks.cfg  install.log.syslog  preupgrade          rhel-server-7.4-x86_64-dvd.iso

install.log      playground          preupgrade-results

[root@localhost ~]# redhat-upgrade-tool --iso rhel-server-7.4-x86_64-dvd.iso

Then reboot

[root@localhost ~]# reboot

7. The upgrade is now complete. Check your version after the upgrade!!!! And don’t forget to verify that your other software and functionality run correctly.

[root@localhost ~]# cat /etc/redhat-release

Red Hat Enterprise Linux Server release 7.4 (Maipo)

---------------------------------------------------------------------------------------------------------------------------

I hope this information can guide you, more or less, through upgrading Red Hat Linux 6.9 to 7.4. I’m new to Red Hat myself and still have a lot to learn.

Please let me know about your experience upgrading your own Red Hat system, or ask if you have any questions; I’ll try my best to help. Thanks for reading!

Reference Sites

1.     https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/migration_planning_guide/chap-red_hat_enterprise_linux-migration_planning_guide-upgrading

2.     https://access.redhat.com/discussions/3066851?tour=8


Welcome to Comnet’s Blog

Our team has been blogging and sharing stories on the Internet for some time, but we had never set up a common repository; maybe the right time has come!

Please stay tuned for updates from our team on both technology and lifestyle, in the hope of contributing back to the world after taking so much from everybody.
