7/30/2019 os patching
1/30
NYIT
Fall 2012
TERM PAPER: 01
Title: OS Patching, Updating, Upgrading & Core Dump Management
Name: Shivapuram Mithilesh
Class ID#: 19
School ID#: 0837622
Course: Operating Systems Security
Course ID: CSCI-620-M01
Date: 1/17/12
Assignment Contents
1. Overview ..................... 3
2. Patching Linux ............... 4
3. OS UPGRADING ................. 18
4. OS UPDATING .................. 22
5. LINUX CORE DUMPS ............. 25
6. CONCLUSION ................... 29
1. Overview
With the advent of ever-evolving technologies like SANs, virtualization and
server consolidation, data centers are glowing with even more shimmering lights, and
humming with the buzz of smaller form factor stand-alone servers, farms of virtual
machine servers and rows of blade centers. The ease of using server templates, cloning
and automated installations has definitely had a great impact on the number of servers
you end up managing today. What may have been a 10:1 server-to-technician ratio several
years ago has now changed, and enterprise-sized server farms of several hundred
machines are managed by just a handful of people. Simply put: if you can build it faster,
better and cheaper, someone will take notice and expect more.
Along with the growing data center, you also have the rise of Linux as an enterprise-level
operating system. As more tech houses use this constantly maturing operating system,
they run into issues like support, hardware compatibility and finding ways to get more
bang for their buck using open source components in their existing infrastructures. Now,
aside from finding better ways to manage your hardware, using tools, monitoring
processes and other fun IT stuff, one of the biggest headaches IT has to face is keeping
your machines up to date. Yes, we're talking about patching.
When it comes to patching, Microsoft has the edge by far. Regardless of the number of
patches Microsoft puts out every year, being the popular operating system that it is, it
gets pretty good support from the industry when it comes to facilitating patch
management. Aside from using Microsoft Update to patch your machines, there are
plenty of third-party tools that support the Windows operating system. For Linux, on the
other hand, you'll only find a few third-party tools. You can use the built-in update
processes that the OS has to offer, but they can be quite clumsy, especially if scheduling is
required, or if there are package dependencies to consider. The few third-party tools
available can be rather limiting as well, since the majority only work with RedHat. You
also need to deal with a vast number of machines in your server farm. How can you
manage large-scale patch deployments across thirty, sixty or even several hundred
servers?
In this article, I try to cover some of the basics of patching Linux using built-in
mechanisms, what's available in the third-party tool market and some of the obstacles
I've run into when trying to manage a small to large data center full of Linux servers.
http://www.tomsguide.com/us/security-linux-update,review-1033.html
2. PATCHING LINUX
There are four basic reasons why patching your Linux machines is important:
Security
Maintenance
Supportability
Error Fixing
Security
Possibly the most important reason to update your OS is to maintain a secure
environment for your machines' applications. Applying security patches will update your
machine and plug up security holes left by outdated software or poorly written
applications. You really want to keep others from accessing your file system through
some newly found vulnerability. If someone should get in, that person can possibly get at
important data, change security settings on the machine or even install some little piece
of software you may not so easily catch. For example, software like rootkits can be
installed and will use newly added processes, wresting some control from the unwary
administrator. Even worse, now that a machine is potentially under the control of
someone else, it may become an unwilling participant in a bot attack involving other
commandeered machines, coming from your network or across the Internet.
There are plenty of ways to keep your machines safe, but most importantly, keep
up with all the latest security alerts. Checking up on updated packages occasionally
can save you from having to deal with the repercussions of having your data stolen or
rebuilding your machine. Vendors and distributors like RedHat, SuSE and Ubuntu have
special alert services and websites that get updated with the latest security news and
information. You can also look up security-focused web sites like Secunia or the United
States Computer Emergency Readiness Team (US-CERT) for more information on
current vulnerabilities and how they're affecting other computers in the wild.
Maintenance
Maintaining a solid working environment is the second reason for keeping your machine
up to date. Having the latest and greatest software keeps you up with the times. As we all
know, technology doesn't slow down, and new software features are always popping up.
For example, an application's previous version may have needed an interface to a
MySQL database, but with the advent of a new XML feature, the database requirement
becomes non-existent. By updating your software, you can use the newer XML feature,
and enjoy the benefits of updated technology.
Patching your Linux machine may also present another challenge: dealing with
dependencies. If you patch your OS the wrong way, you may run into dependency
conflicts that, if not resolved, could prevent you from updating your application. Let's
take an application like Gallery, a web-based open-source photo album, as an example.
You definitely wouldn't be able to run Gallery with an older MySQL installation on your
computer. Certain requirements would not be met, and during the Gallery installation you
would get messages coming back about first having to update other dependent packages.
You would then have to update those dependencies as well for your Gallery installation
to succeed. Theoretically, you could spend quite some time trying to find the appropriate
packages until you get it all straightened out.
Supportability
If you are going to run Linux in an enterprise environment where you have various levels of
expertise on staff, it is important to make sure that you have your OS at a supportable
level. Sure, Linux may be a free operating system, but if your operations are the type that
support life or manage your company's finances, you need to have access to a high level
of expertise. You'll never know for sure if you're going to need it, and while support is
not cheap, it's necessary.
To qualify for support from most vendors, if not all, you need to have a supportable
version of the OS to call in for. Just ask yourself this: in 2007, who supports RedHat
Linux 6.0? Running an older version of an OS can potentially be more expensive to
support, as fewer people work with it. Thus, it's to your benefit to upgrade that RedHat
server to a newer version, if not the latest. The big Linux distributions will usually list
their supported OS levels, and also give end-of-life information so you can know when
you should upgrade those older machines and OSes.
Error Fixing
The last reason you want to install newer software packages is to replace
software that is problematic. Memory leaks, for example, are problems caused by errors
that may have been missed during development. Software performance can also be fixed
or improved on a well-maintained machine. Just keep in mind that while most of these
updates are listed as optional, they can also be listed in a critical category if their
defects can lead to security holes or other vulnerabilities.
2.1 How to Patch Your Linux Installation
Like all OSes, every once in a while you need to update the software running on your
Linux server. You can do this in one of three ways:
Download the updated packages and manually install them yourself.
Use a built-in open source application that comes with the OS distribution.
Use a third-party application that downloads the file and then runs the installation for you.
Let's look at these in more detail.
Manual Updates
One way you can update your RedHat or SuSE machine is by going to your particular
vendor's Web or FTP site, and downloading the packages directly from the online file
repository or a trusted mirror site. For recent products, like Novell's SLES or RedHat
Enterprise Server, once you get the file onto your machine you can then run the RedHat
Package Manager (aka rpm) and update the target program you choose.
FIGURE 2.1: RPM
After downloading rrdtool's latest RPM, you can run rpm -i to install the new
package, or rpm -U if you are upgrading rrdtool. The next RPM command queries all the
installed RPMs and extracts only the information you want, using the grep command.
The third command uninstalls rrdtool using rpm -e. Finally, the last line confirms
that the application rrdtool is not installed anymore.
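The sequence described above can be sketched as follows; the rrdtool file name is illustrative and will differ by version and architecture.

```shell
# Install a freshly downloaded package (file name is hypothetical):
rpm -i rrdtool-1.2.30-1.el5.i386.rpm

# Or upgrade an already-installed copy in place:
rpm -U rrdtool-1.2.30-1.el5.i386.rpm

# Query all installed packages and filter for the one of interest:
rpm -qa | grep rrdtool

# Uninstall (erase) the package:
rpm -e rrdtool

# Confirm removal; rpm replies that the package is not installed:
rpm -q rrdtool
```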
FIGURE 2.2: UPDATE OF RPM
If you have the location of a package available via a URL, you can point RPM at it to
update it for you. This image shows the update, the confirmation, and the removal of the
livna-release-6-1 package.
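A sketch of the same operation; the URL below is hypothetical and stands in for wherever the package is actually hosted.

```shell
# rpm can fetch the package itself when given a URL:
rpm -Uvh http://rpm.example.org/livna-release-6-1.noarch.rpm

# Confirm the package landed:
rpm -q livna-release

# And remove it again:
rpm -e livna-release
```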
In a perfect world, once you run the rpm command, you're done in a moment or two.
Confirmation may be a short message provided by the package installation, or you just
get a command prompt ready for your next Linux command. However, since we don't
live in a perfect world, things can get a little confusing. You may run into a dependency
issue where another package on your machine has to be updated before you can update
your target program.
FIGURE 2.3: FAILED RPM INSTALLATION
A failed RPM installation will generate a message like the one above and will not continue
the install. Things can get worse when you realize that the packages needing updates
require further updates themselves, turning a simple upgrade into a longer exercise of
figuring out how to deal with all these dependencies and sub-dependencies.
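This is exactly the problem the repository-aware tools in the next section were built to solve: pointed at a package repository, they resolve the whole dependency chain automatically. A sketch of the contrast, using a hypothetical package file:

```shell
# Bare rpm stops at the first unmet dependency and reports
# a "Failed dependencies" error:
rpm -i gallery-1.5-1.noarch.rpm

# A depsolving tool such as yum pulls in the package plus
# everything it requires in one transaction:
yum install gallery
```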
2.2 Built-In Tools
Because manually updating your OS can become pretty complicated, Linux distros come
with applications that do the downloading and installation for you. All it takes is a simple
command line argument and some time, as the application downloads, verifies and
installs your patches. You'll especially need the time if this is a newly installed machine,
or one that hasn't been updated in a very long time, or if you have a slow connection.
One way to shorten your update time is to build your own repository and sync it with
other mirror sites. That way, if your repository is on your own network, your updates will
not have to cross numerous other networks to get to your Linux machine, significantly
speeding up the patching process. For example, SuSE Enterprise Linux Server 9 gave you
the option of configuring a YaST Online Update (YOU) server using its YaST
management tool. With a little fiddling around, you can schedule a file sync between
your YOU server and the available Novell repositories, to update your machines by
pointing them to your own internal update server.
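Many public mirrors expose rsync for exactly this purpose; a nightly cron entry along these lines keeps a local copy current (the mirror URL and paths are hypothetical).

```shell
# Pull the updates tree from a public mirror into a local directory
# that your web server exports to the rest of the network:
rsync -av --delete rsync://mirror.example.com/updates/ /srv/updates/

# A nightly /etc/crontab entry to keep the mirror fresh:
#   30 2 * * * root rsync -aq --delete rsync://mirror.example.com/updates/ /srv/updates/
```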
As things are always changing, newer versions of RedHat and OpenSuSE use an open
source utility called yum (for YellowDog Updater, Modified), which works with RPM-based
installations and updates. Yum has been slowly replacing up2date in the
RedHat/Fedora realm, while also being reworked into SuSE's Linux versions. Even though
you can run yum from a command line, various graphical tools have been created to
facilitate the update operation, making it easier to keep your machines up to date.
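Typical yum invocations look like the following; the package name is illustrative.

```shell
yum check-update     # list packages with updates available, changing nothing
yum update rrdtool   # update one package plus whatever it depends on
yum update           # update everything installed on the system
yum list installed   # show what is currently installed
```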
FIGURE 2.4: YUM UPDATER
You can install single or multiple packages using YUM (YellowDog Updater, Modified).
In versions 7, 8 and 9 of SuSE's Enterprise Linux Server, SuSE's main administrative
utility YaST (short for Yet another Setup Tool) has a subcomponent called YaST Online
Update, or YOU. As opposed to the YOU server mentioned above, this utility is the client
end of SuSE's patch management tool. The YOU client gives you an ncurses interface or
GUI that downloads and installs RPMs from Novell's SuSE portal site using a registered
login. You can also point to other SLES repositories, without having to edit any
configuration files.
It's a pretty straightforward process to run; the only problem I've encountered with YOU
is remembering to have the update point to the correct repository. If the update fails and
can't find any new packages, it may be because you're using an outdated version of
YaST that lacks the updated server listing. This is usually resolved by updating YOU to
its latest version, which has the updated pointers to newer download sites.
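On SLES you can also launch YOU from a terminal rather than the desktop menus; the module name below is the usual one, though it can vary between YaST releases.

```shell
yast online_update    # ncurses interface, handy over an ssh session
yast2 online_update   # graphical interface under X
```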
FIGURE 2.4: SuSE
SuSEs YOU (YaST Online Update) is a great GUI-based tool that simplifies package
management and updating.
Another option for updating your newer SuSE Linux Enterprise servers is the command-line
interface called rug. It works with the ZenWorks Management Daemon (zmd) and
gives you various command line options. Rug stands out because of a nice feature that
sorts similar software into channels. This helps focus your update installation on the
software you need, without requiring you to install updates that you don't. The use of
ZenWorks with Linux is obviously something that came about after Novell's purchase of
SuSE in 2003. A systems administrator can still use YOU to update his server, but you
have to remember to first register your machine through YaST's built-in module in order
to access the official Novell repositories.
Ubuntu, a Debian-based Linux distro, uses a different utility called apt (for Advanced
Package Tool). This package management tool works with Debian's .deb packages rather
than RPM, and likewise relies on connecting to an external repository. In order to get the
latest packages installed on your Ubuntu box, you run the apt-get program; you can then
either specify which package you want to update, or update all updatable applications on
your machine.
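In practice that boils down to a couple of commands (the package name is illustrative):

```shell
sudo apt-get update            # refresh the package lists from the repositories
sudo apt-get upgrade           # upgrade everything that has a newer version
sudo apt-get install rrdtool   # or fetch/update just one package
```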
http://www.tomsguide.com/us/security-linux-update,review-1033-4.html
FIGURE 2.5: Ubuntu updater
Package management in the Ubuntu distro can be done using apt-get. This particular
command updates your list of packages.
FIGURE 2.6: Package management in Ubuntu
Ubuntu uses the Synaptic Package Manager for its graphical updates.
2.3 Third Party Tools
Aside from the Linux vendor-based utilities like ZenWorks and YaST, there are also third-party
commercial applications geared to facilitate the patching process. Some of these
server-based applications will download Linux packages from the vendor, store the
updates in a central repository, and provide either automated or manual installation of
packages to waiting Linux servers.
These applications, especially for Linux, are few in number; you can spend a considerable
amount of time trying to find a straightforward technology that isn't just a small piece of
a larger enterprise-level package (like ZenWorks). The apps I found mostly patch
the Microsoft Windows OS, while a handful of solutions will patch RedHat Linux.
Support for SuSE can also be found, but other Linux distros were pretty much left out,
except in a couple of products.
Patchlink Update is one solution that patches Windows and RedHat systems, but will also
cover SuSE, Solaris, AIX, NetWare and OS X. Seeing that RedHat and SuSE are two of
the dominant Linux distros in enterprise computing, support for these two distributions
puts Patchlink in a very advantageous position over its competitors. It has an agent-based
architecture that lets you choose how you want to install your patches, either on a
schedule or manually.
FIGURE 2.7: PATCHLINK
The great thing about Patchlink is that it touches a lot more operating systems than most
update tools.
From a top level perspective, to have one application take care of most of your patching
needs across various operating systems and supporting software is a dream come true.
You get to avoid running multiple patching systems for each OS, saving time and
resources that you can deploy elsewhere.
7/30/2019 os patching
12/30
FIGURE 2.8: PATCHLINK UPDATER
Optimism aside, Patchlink does have its setbacks. For one, I haven't seen any great
performance from its system discovery tool: it's slow and takes a while to start up on a
two-year-old server. In fact, I've seen it take 10-15 minutes to get going.
Patchlink, like similar applications, needs a little hand-holding when deploying updates.
One thing I've learned is that if you are using a scheduling feature that will download,
install and reboot your servers, don't go telling your users that their servers are going to be
automatically rebooted at 10 pm, for example. Patching can be slow, and delays on the
network can delay the actual system restart, frustrating both the sys admin and his clients.
The common practice to avoid delayed downtimes is to just patch the machines ahead of
time and then reboot the server.
One other gotcha about Patchlink Update is that its base concept is very security-centric.
Patchlink doesn't take care of the occasional maintenance patches; it only covers security-based
updates that fix vulnerabilities when the updated software is available.
Finally, Patchlink does not release kernel updates. Seeing that the kernel is the heart of
your Linux system, this can be a little bothersome. The only solution is to patch the
kernel yourself.
Though not a traditional patch management system, one product line that holds some
promise is BlueLane's set of PatchPoint and Virtual Shield products. These products are
appliance-based gateways that sit between your network and your unpatched servers.
They protect those servers by cleaning up the network traffic headed their way,
mitigating incoming attacks against known vulnerabilities.
2.4 Problems with Patching
So you think that patching a Linux server is pretty straightforward? You're probably
90% right in that assumption, but there are several issues that need to be planned for or
addressed when you are considering running updates on your servers.
Networks: The Bigger They Are...
If you're running a small shop, patching your 5 to 10 servers will probably not be a big
problem. Given a particular span of time, you should be able to have them all updated
before the next big patch cycle. Still, it will take some coordination among your users,
developers and management to minimize the effects of scheduled downtime. You'll have
to help specify how long the machines will be unavailable, have an idea of the number of
reboots involved, and make sure that someone knowledgeable enough about the machine
is ready to test whatever application the machine is running once the patching is done.
Now, take that same process and multiply the number of those machines by 10, or even
20. Big shops will not only have a large number of machines to patch, but the number of
people affected by downtime will surely rise, as these enterprise-level services will now
affect users numbering in the hundreds. You will also have to deal with larger-scale
development schedules, and the fun politics that can come from management and the
user community.
There is no perfect solution for this: strategies and non-strategies will dictate the tools
you'll need to get all these servers patched. Approaches ranging from clustering servers to
social engineering will help get you around the various roadblocks you'll encounter as you
try to coordinate and compromise on reboot times ranging across all hours of the day and
night. You'll need to provide assurances about when the systems will come back up,
since the word patching can be synonymous with unscheduled downtime due to
either incompatibilities between vendor software and OS updates, or just human error.
You'll also need to prepare for such occasions by having reliable backups available,
prepped fail-over systems that are ready to go if the primary system doesn't come back
up, and the experience of a well-educated sys admin who can help troubleshoot problem
updates after their installation.
These are just some of the issues I run into when trying to manage a large network of
Linux servers in an enterprise environment. Without the right size staff, a lot of this would
be difficult to get done, making the management of a small IT shop look like a piece of
cake by comparison.
Another problem with big shops is that they tend to have a variety of configurations when
it comes to their OS installs. Because of this, it's hard to pick and choose which patches
should be applied to your machines; you don't really have the time to sit there and decide
which patches need to be installed first. In response to this dilemma, you may opt to just
install all of the available updates. The world would be perfect if we could have servers
running a single upgradeable app, but it just isn't so. Instead, you have machines running
Tomcat or Apache HTTP Server as your primary app, yet you also rely on the services
provided by ntp, openssh or xinetd. These services, as well as many others, are updated
over time, and should be upgraded when given the chance. So, don't just update that one
important app; update them all. It may save you potential headaches in the future.
The only exception to this "patch everything" rule is kernel patches. They usually require
a restart of the system, whereas other patches require a daemon restart or no restart at all.
If updating the machine means not having to reboot the server, then that's just one less
thing to worry about, so save the kernel patching for a better time.
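With yum, this split can be scripted directly: the --exclude pattern holds kernel packages back during the routine run, and the kernel catches up during a planned window.

```shell
# Apply every pending update except the kernel; no reboot required afterwards:
yum --exclude='kernel*' update

# Later, during a scheduled maintenance window:
yum update 'kernel*'
reboot
```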
Disk Space
One of the problems I've run into when it comes to patching a Linux OS is disk space.
Hard drive capacity-to-cost ratios are getting better all the time: the amount of money
spent on a 20 GB hard drive a few years ago will now buy you a 500 GB drive, so you
can never really have too much disk space. Application logs, home directories, dump
files and third-party software will always be there. This requires you to keep track of the
available disk space you have on your machine, though; remember to always have
enough space when you download your patches. Thanks to the large disk capacities of
today, running out of disk space is probably the last thing on your mind, but in the case
of virtualized machines or older hard drives, capacity can be an issue. If a machine is in
the process of downloading a large number of patches, it's going to need the space to
store these packages so that the installation process can access the files and execute the
install.
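A small pre-flight check along these lines can catch the problem before the download starts; the 2 GB threshold and the /var path are assumptions to adjust for your own layout.

```shell
#!/bin/sh
# Abort a patch run early if the download area looks too small.
REQUIRED_KB=$((2 * 1024 * 1024))                  # assume 2 GB is enough
AVAIL_KB=$(df -Pk /var | awk 'NR==2 {print $4}')  # free KB on /var's filesystem

if [ "$AVAIL_KB" -lt "$REQUIRED_KB" ]; then
    echo "Only ${AVAIL_KB} KB free on /var; refusing to download patches" >&2
    exit 1
fi
echo "Disk check passed: ${AVAIL_KB} KB free"
```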
FIGURE 2.9: Contents of /var/cache/yum
The contents of the /var/cache/yum sub-directories can show lots of valuable space
wasted holding old RPMs you don't need anymore.
In SuSE's Enterprise Server, for example, all of these files are stored in organized local
directories on the updated server itself. Once YaST Online Update has been executed and
these RPMs have been used, they can be manually removed if needed. In older versions of
YaST's Online Update, you could check the Remove installation files box at the end of
the update, which would remove the downloaded RPM files once the machine was done
installing them.
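On yum-based systems the equivalent cleanup is built in; the du command just shows how much space you are about to reclaim.

```shell
du -sh /var/cache/yum   # see how much space cached packages are using
yum clean packages      # delete cached .rpm files but keep the metadata
yum clean all           # or wipe the cache entirely
```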
Scheduling
Getting folks to work together can be a grueling exercise in itself, as the different people
who administer, develop and use a specific server may have trouble coordinating an
appropriate downtime period: this is especially true for enterprise applications that
require 24/7 uptime. One of the options that may come up is a late-night restart;
yes, someone has to do it. In most cases, the best time to restart a server is when there
will be the least amount of activity on it; this could be lunchtime, right after normal
working hours or midnight. Either way, it's got to be done, but be sure you're ready to
respond if something should go wrong. One option is to schedule your downtime during a
larger-scale planned outage. Just be aware of possible system dependencies that your
application may rely on, like network connectivity or authentication. Those opportune
times may not be as good as you think they are, and this can lead to further havoc if you
can't verify that your machine is back in a good state.
Now, for proper scheduling to happen, you need to have a clear and effective Service
Level Agreement (SLA) with your clients. Without this in place, no rules are set and you,
as the sys admin, won't have any ground to stand on when it comes to working with
others' schedules. Hours of operation need to be defined with those who depend on your
system, so that you can easily identify downtime windows for working on a machine.
Fail Over
One effective way to shorten or eliminate downtime during a patch cycle is to configure
fail-over partners. Generally, this just means building two machines that run the same
app, but keeping one server as the primary box and the second server as a backup. This
keeps one machine available to the user community while the second server is in hot fail-over
mode, in case the first server should go down. When it comes to patching, the sys
admin can patch the backup box, confirm the production application is operating, and
switch the application's functionality from the primary server to the secondary server.
This can be done using built-in clustering utilities or a manual DNS change. Either
way, this helps prevent any long-term downtime, so users can continue with their work
with minimal or no interruption at all.
Start Up Scripts
Patches are great, but if you're not careful and don't bother to test your machine before
you reboot, you may find that your start-up scripts have been rearranged: this is
especially likely to happen to third-party and custom applications. This change in the
start-up order can hang the machine during the start-up process if the moved item is
dependent on the network daemon being up for it to start. In some cases, an application
will be moved to a position before the network startup script is executed, causing the app
to hang because of the lack of a networking process for it to start with.
To avoid problems, after applying the last patch, check your startup scripts in
/etc/init.d/rcx.d or /etc/rcx.d (depending on your flavor of Linux) and verify that your
scripts haven't been renamed and moved earlier in the start-up process. This will save
you the trouble of having to reboot the machine into single-user mode or using a rescue
disk so that you can rename the startup files.
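A quick listing of the runlevel directory, sorted in boot order, makes reordered links easy to spot; the runlevel-3 path is an example and differs between distributions.

```shell
# Show the start ('S') links for runlevel 3 in the order init runs them:
ls /etc/rc3.d 2>/dev/null | grep '^S' | sort

# Comparing against a listing saved before patching makes changes obvious:
#   ls /etc/rc3.d > /root/rc3.before     (run before the update)
#   ls /etc/rc3.d | diff /root/rc3.before -
```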
http://www.tomsguide.com/us/security-linux-update,review-1033-7.html
FIGURE 2.10: STARTUP SCRIPTS
Be sure to check your startup scripts after patching your machines. Reorganized RC
directories can keep your machine from starting up correctly, especially for applications
that require network connectivity.
http://www.tomsguide.com/us/slideshow/PatchingLinux14,0101-70379-0-2-3-1-jpg-.html
3. OS UPGRADING
There is one thing to understand about updating Linux: not every distribution handles
this process in the same fashion. In fact, some distributions are distinctly different, down
to the file types they use for package management.
Ubuntu and Debian use .deb
Fedora, SuSE, and Mandriva use .rpm
Slackware uses .tgz archives which contain pre-built binaries
And of course there is also installing from source, or from pre-compiled .bin or .package
files.
As you can see, there are a number of possible systems (and the above list is not even close
to being all-inclusive). So to make the task of covering this topic less epic, I will cover
the Ubuntu and Fedora systems. I will touch on both the GUI as well as the command
line tools for handling system updates.
3.1 Ubuntu Linux
Ubuntu Linux has become one of the most popular of all the Linux distributions. And
through the process of updating a system, you should be able to tell exactly why this is
the case. Ubuntu is very user friendly. Ubuntu uses two different tools for system update:
apt-get: Command line tool.
Update Manager: GUI tool.
Figure 3.1: Ubuntu Update Manager.
The Update Manager is a nearly 100% automatic tool. With this tool you will not have to
routinely check to see if there are updates available. Instead, you will know updates are
available because the Update Manager will open on your desktop (see Figure 3.1) as soon
as updates are ready, on a schedule that depends upon their type:
Security updates: Daily
Non-security updates: Weekly
If you want to manually check for updates, you can do this by clicking the Administration
sub-menu of the System menu and then selecting the Update Manager entry. When the
Update Manager opens, click the Check button to see if there are updates available.
Figure 3.1 shows a listing of updates for an Ubuntu 9.10 installation. As you can see, there
are both Important Security Updates as well as Recommended Updates. If you want to get
information about a particular update, you can select the update and then click on the
Description of update dropdown.
In order to update the packages follow these steps:
1. Check the updates you want to install. By default all updates are selected.
2. Click the Install Updates button.
3. Enter your user (sudo) password.
4. Click OK.
Figure 3.2: Updating via command line
The updates will proceed and you can continue on with your work. Some updates may
require you either to log out of your desktop and log back in, or to reboot the machine.
There is a new tool in development (Ksplice) that allows even a kernel update without
requiring a reboot. Once all of the updates are complete, the Update Manager main
window will return, reporting that Your system is up to date.
http://www.ksplice.com/
Now let's take a look at the command line tools for updating your system. The Ubuntu
package management system is called apt. Apt is a very powerful tool that can
completely manage your system's packages via the command line. Using the command
line tool has one drawback: in order to check to see if you have updates, you have to run
it manually. Let's take a look at how to update your system with the help of apt. Follow
these steps:
1. Open up a terminal window.
2. Issue the command sudo apt-get upgrade (you may want to run sudo apt-get update first, so the package lists are current).
3. Enter your user's password.
4. Look over the list of available updates (see Figure 3.2) and decide if you want to go through with the entire upgrade.
5. To accept all updates press the 'y' key (no quotes) and hit Enter.
6. Watch as the update happens.
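One related command worth knowing: apt-get upgrade never adds or removes packages to satisfy changed dependencies, while dist-upgrade will.

```shell
sudo apt-get upgrade        # upgrade in place; never installs or removes packages
sudo apt-get dist-upgrade   # also handles upgrades whose dependencies changed
```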
That's it. Your system is now up to date. Let's take a look at how the same process
happens on Fedora (Fedora 12, to be exact).
3.2 Fedora Linux
Fedora is a direct descendant of Red Hat Linux, so it is the beneficiary of the Red Hat
Package Management system (rpm). Like Ubuntu, Fedora can be upgraded by:
yum: command line tool.
GNOME (or KDE) PackageKit: GUI tool.
Figure 3.3: GNOME PackageKit
Depending upon your desktop, you will either use the GNOME or the KDE front-end
for PackageKit. In order to open up this tool you simply go to the Administration
sub-menu of the System menu and select the Software Update entry. When the tool
opens (see Figure 3.3) you will see the list of updates. To get information about a
particular update all you need to do is to select a specific package and the
information will be displayed in the bottom pane.
To go ahead with the update click the Install Updates button. As the process happens, a progress bar will indicate where GNOME (or KDE) PackageKit is in the steps. The steps are:
1. Resolving dependencies.
2. Downloading packages.
3. Testing changes.
4. Installing updates.
When the process is complete, GNOME (or KDE) PackageKit will report that your system is up to date. Click the OK button when prompted.
Now let's take a look at upgrading Fedora via the command line. As stated earlier, this is done with the help of the yum command. In order to take care of this, follow these steps:
Figure 3.4: Updating with the help of yum.
1. Open up a terminal window (do this by going to the System Tools sub-menu of the Applications menu and selecting Terminal).
2. Enter the su command to change to the super user.
3. Type your super user password and hit Enter.
4. Issue the command yum update and yum will check to see what packages are available for update.
5. Look through the listing of updates (see Figure 3.4).
6. If you want to go through with the update enter 'y' (no quotes) and hit Enter.
7. Sit back and watch the updates happen.
8. Exit out of the root user command prompt by typing "exit" (no quotes) and hitting Enter.
9. Close the terminal when complete.
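As with apt, the yum procedure can be collected into a short script. Again a sketch only: the filename update-fedora.sh is arbitrary, and it assumes a Fedora/Red Hat style system with yum installed, run as root:

```shell
cat > update-fedora.sh <<'EOF'
#!/bin/sh
set -e           # abort if any step fails
yum -y update    # check for updates and apply them without prompting
EOF

sh -n update-fedora.sh   # syntax-check only; run it as root to update
```

Here the -y flag plays the role of the interactive 'y' answer in the steps above.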
4. OS update
You can update your system in two different ways:
from packages;
from an ISO image.
4.1 Updating from packages
The procedure is described below.
4.1.1. Overlay and Portage tree update
Since our binary repos are regularly updated, make sure you have the latest version of Portage and the Calculate overlay before you install or update any packages.
Your system is updated with a single command:
eix-sync
Once eix-sync is running, it will update consecutively:
all of your overlays;
your Portage tree;
your eix database.
4.1.2. Software update
For the Calculate Linux 11.0 release, we created binary package repositories for four distros: CLD, CLDG, CLDX and CDS; now all versions of CL have binary profiles, so by default, updates are performed from binary packages. To change the default update method enter the following:
eselect profile set X
where X is the number under which the desired profile is listed.
If you do not know this number, you can view the list of profiles available for your
architecture by entering:
eselect profile list
To update from binary packages use the "binary" profile.
To update all of the packages installed on your system, execute:
emerge -uD world
If you have modified USE-flags, enter:
emerge -uDN world
Some packages you want to update may require a masked dependency, or changes in USE flags. The "--autounmask" option should help to resolve such dependencies.
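The whole package-based update can likewise be scripted. A sketch, assuming a Calculate/Gentoo system where eix-sync and emerge are available (the filename is arbitrary; run the script as root):

```shell
cat > update-calculate.sh <<'EOF'
#!/bin/sh
set -e
eix-sync            # sync overlays, the Portage tree and the eix database
emerge -uDN world   # update installed packages, honoring changed USE flags
EOF

sh -n update-calculate.sh   # syntax-check without running
```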
4.1.3. Update configuration files
When you update packages, the configuration files of the programs are not overwritten by default. To view and apply new settings, execute:
dispatch-conf
The main commands of this utility are: "PageUp"/"PageDown" to browse the configuration file, "u" to replace the current configuration file with the new one, "z" to delete the new configuration file, and "q" to quit.
If you want your configuration files to be corrected automatically, set the variable "cl_autoupdate_set" in /etc/calculate/calculate2.env, as shown below:
[main]
cl_autoupdate_set = on
In this case, always make sure that all config modifications are done correctly and use templates.
4.2 Updating from an ISO file
It is possible to upgrade your system by installing a new image into the free system partition. Main settings such as user accounts, network settings, mount points, screen resolution and others will be transferred, and additional settings will be adjusted by templates during installation as well.
If you have Calculate Directory Server installed, make sure that the /var/calculate directory is mounted from a separate partition on your hard drive. If it is not, transfer your data and add the required mount point in /etc/fstab. The update procedure is described below. Open the console with root privileges and follow these steps:
4.2.1. Update the installer
For a correct update, always use the latest version of calculate-install. You can update itby executing:
eix-sync && emerge calculate-utilities
4.2.2. Download the ISO image of the latest assembled stage
Weekly stages of our distros are accessible from the http mirror http://mirror.cnet.kz/calculate/, in "stages". Download the latest available image:
cd /var/calculate/remote/linux
wget http://mirror.cnet.kz/calculate/CLD/stages/i686/cld-20111017-i686.iso
The command shown above must contain the correct path to the image file of your distro of the appropriate architecture.
4.2.3. Install a new version of the system
cl-install
If you update Calculate Directory Server, save a copy of your server settings and the LDAP database by executing the following:
cl-backup
Reboot your computer. To restore the LDAP database and the server settings enter:
cl-rebuild
The main advantages of this method are:
reliability - you can always boot into the previous system, if the new one should ever become unstable;
speed - it will require about 5-7 minutes to complete the system upgrade.
5. Linux Core Dumps
Note that most of the theory (particularly the low level segmentation fault details) is also
valid for Windows platforms and other operating systems. The commands to configure
core dumps and retrieve them are Linux specific though. I also assume that the program
you are trying to debug is FreeSWITCH, but you can easily change the program name
to the one that is misbehaving in your case and you should be fine.
What is a core dump?
Sometimes problems with FreeSWITCH, Asterisk or just about any other program in
Linux are hard to debug by just looking at the logs. Sometimes the process crashes and
you don't have a chance to use the CLI to look at stats. The logs may not reveal anything
particularly interesting, or just some minimal information. Sometimes you need to dig
deeper into the state of the program to poke around and see what is going on inside.
Linux process core dumps are meant for that.
The most typical case for a core dump is when a process dies violently and
unexpectedly. For example, if a programmer does something like this:
*(int *)0 = 0;
In real code it is usually not that straightforward; it may be that the programmer did not expect a certain variable to contain a NULL (0) value. The process is killed by the Linux kernel because, by default, a Linux process does not map the memory address 0 to anything. This
causes a page fault in the processor, which is trapped by the kernel. The kernel then sees
that the given process does not have anything mapped at address 0 and then sends the
SIGSEGV (Unix signal) to the process. This is called a segmentation fault.
The default signal handler for SIGSEGV dumps the memory of the process and kills the
process. This memory dump contains all the memory for this process (and all the threads
belonging to that process) and that is what is used to determine what went wrong with the
program that attempted to reference an invalid address (0 in this case, but any invalid
address can cause it).
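You can observe the kernel's SIGSEGV handling from the shell without writing any C. This sketch delivers SIGSEGV to a throwaway child shell with kill, instead of actually dereferencing address 0; the effect on the exit status is the same:

```shell
sh -c 'kill -SEGV $$'   # the child shell dies from SIGSEGV
echo $?                 # prints 139, i.e. 128 + 11 (11 is SIGSEGV's number)
```

An exit status above 128 is the shell's convention for "killed by signal (status - 128)".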
How can I make sure the core dump will be saved?
Each process has a limit on how big this core can be. If the limit is exceeded, no core dump will be saved. By default this limit is 0, which means no core will be dumped at all.
Before starting the process you must use ulimit. The ulimit command sets various
limits for the current process. If you execute it from the bash shell, that means the limits
are applied to your bash shell process. This also means any processes that you start from
bash will inherit those limits (because they are child processes from your bash shell).
ulimit -a
That shows you all the limits for your bash shell. In order to guarantee that a core will be
dumped you must set the core file size limit to unlimited.
ulimit -c unlimited
If you are starting the process from an init script or something like that, the init script has
to do it. Some programs are smart enough to raise their limits themselves, but it is always better to make sure you have an unlimited core file size for your bash shell. You may then want to add those ulimit instructions to your $HOME/.bashrc file.
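Setting and verifying the limit looks like this (run it in the shell that will start the process; it assumes the hard limit allows raising the soft limit, which is the common default):

```shell
ulimit -c unlimited   # allow core files of any size in this shell
ulimit -c             # with no size argument, prints the current limit
```

Any process started from this shell afterwards inherits the limit.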
Where is the core dump saved?
Each process has a working directory.
That is where the process core dump will be saved by default. However, some system-wide settings affect where the core is dumped.
/proc/sys/kernel/core_pattern and /proc/sys/kernel/core_uses_pid are 2 files that
control the base file name pattern for the core, and whether the core name will be
appended with the PID (process ID).
The recommended settings are:
mkdir -p /var/core
echo "/var/core/core" > /proc/sys/kernel/core_pattern
echo 1 > /proc/sys/kernel/core_uses_pid
You can confirm what you just did with:
cat /proc/sys/kernel/core_pattern
cat /proc/sys/kernel/core_uses_pid
These settings will cause any process in the system that crashes to dump its core at /var/core/core.&lt;PID&gt;.
What if I just want a core dump without killing the process?
In some situations a process becomes unresponsive, or its response times are not ideal. For example, you try to execute CLI commands in Asterisk or FreeSWITCH but there is no output, or worse, your command line prompt gets stuck. The process is still there, it may even be processing calls, but some things are taking a lot of time or just don't get
done. You can use gdb (The GNU debugger) to dump a core of the process without
killing the process and almost with no disruption of the service. I say almost because for
a large process, dumping a core may take a second or two; in that time the process is frozen by the kernel, so active calls may drop some audio (if you're debugging a real-time audio system like Asterisk or FreeSWITCH).
The trick to do it fast is to first create a file with the GDB commands required to dump
the core.
Recent versions of CentOS include (with the gdb RPM package) the gcore command to do everything for you. To dump a core of the running process, you only need to execute:

gcore $(pidof freeswitch)
If you are in a system that does not include gcore, you can do the following:
printf 'generate-core-file\ndetach\nquit\n' > gdb-instructions.txt
The 3 instructions added to the file are:
generate-core-file
detach
quit
This does exactly what we want: generate the core file for the attached process, then detach from the process (to let it continue), and then quit gdb.
You then use GDB (you may need to install it with yum install gdb) to attach to the running process,
dump the core, and get out as fast as possible.
gdb /usr/local/freeswitch/bin/freeswitch $(pidof freeswitch) -x gdb-instructions.txt
The arguments to attach to the process include the original binary that was used to start it
and the PID of the running process. The -x switch tells GDB to execute the commands in
the file after attaching.
The core will be named core.&lt;pid&gt; by default, and the path is not affected by the /proc/sys/kernel settings of the system.
This core can now be used by the developer to troubleshoot the problem.
Sometimes, though, the developer will be more interested in a full back trace (stack trace), because the core dump itself can't easily be examined on any box other than the one where it was generated; therefore it might be up to you to provide that stack trace, which you can do with:
gdb /usr/local/freeswitch/bin/freeswitch core.&lt;pid&gt;
(gdb) set logging file my_back_trace.txt
(gdb) set logging on
(gdb) thread apply all bt full
(gdb) quit
Then send the file my_back_trace.txt to the developer, or analyze it yourself; sometimes it is easy to spot the problems even without development experience!
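The interactive gdb session can also be scripted with gdb's batch mode. A sketch, reusing the FreeSWITCH paths from the text; the wrapper script name is arbitrary and the core file name is passed as the first argument:

```shell
cat > backtrace.sh <<'EOF'
#!/bin/sh
# Usage: sh backtrace.sh core.<pid>
gdb -batch \
    -ex 'set logging file my_back_trace.txt' \
    -ex 'set logging on' \
    -ex 'thread apply all bt full' \
    /usr/local/freeswitch/bin/freeswitch "$1"
EOF

sh -n backtrace.sh   # syntax-check; needs gdb and a real core to actually run
```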
6. Conclusions
One thing I'm always telling others is to make sure they've got their bases covered. Security patches aren't designed to break your software, after all; their developers designed them to protect your machine. Even so, problems can arise after a reboot. If a particular patch does break an application, it could be for a variety of reasons, ranging from poorly written software to outdated standards that fall out of support in the newer software versions you are installing.
You should definitely do the following:
Even if there are 25 or 50 patches needed, be informed of what's being done to your machine.
Focus on your machine's core services, and understand how they may be affected by the new patches.
Coordinate the change with those involved with the system you're patching.
Check your backups and make sure they are available.
Make sure you have the right resources available in case you need help.
Test the patches on non-critical machines, especially if they're similar to the production boxes you're scheduled to patch.
Reboot the machine, if possible, since startup processes may have changed during the
update.
Finally, confirm that everything works the way it should after you're done.
When I used to install car alarms long ago, most of our customers at the alarm shop weren't there because they were being proactive. They were there at the shop because they were the victims of an intrusion and decided to get an alarm after the fact. The same goes with keeping your machines patched and well protected. From a security standpoint, patching is a basic procedure that can keep your machines safe, even if you think you're in the safest network around. Just remember, though, that just as no process is perfect, no network is perfect either. If someone should get past your firewalls, intrusion detection systems and the DMZ, at least you know that you've done your job and added an extra line of defense for your Linux machines.