If you are not able to access the URL, edit /etc/httpd/conf.d/mrtg.conf. Here you can set various Apache directives, such as
Alias, Location, authentication, etc., according to your needs.
I generally prefer setting the Alias directive as ` Alias /mrtg /var/www/html/Cisco-MRTG ` and then allowing only the IP addresses that should be able to view the MRTG graphs. You can also set authentication parameters as per your requirement using htaccess, LDAP, etc.
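For example, a minimal mrtg.conf along these lines restricts the graphs to a single trusted subnet. This is only a sketch; the directory path matches the Alias above, but the subnet is an example value you must adjust (syntax is for the Apache 2.2 series shipped with RHEL/CentOS of that era):

```apache
# /etc/httpd/conf.d/mrtg.conf (sketch)
Alias /mrtg /var/www/html/Cisco-MRTG

<Directory "/var/www/html/Cisco-MRTG">
    Order deny,allow
    Deny from all
    # Allow only the monitoring subnet (example value)
    Allow from 192.168.1.0/24
</Directory>
```

Reload Apache after editing so the new Alias and access rules take effect.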
Hope this guide helps you successfully set up MRTG for your organization.
Skype 4.2 has finally been released for Linux, so today I am sharing how to install Skype 4.2 on Debian / Ubuntu / Fedora.
Skype 4.2 is released for 32-bit versions only, but you can try the following steps on 64-bit and it might work. I have only tried this on 32-bit systems, where it worked. Let me know with your feedback whether it works on 64-bit or not.
The new version is released with minor improvements and bug fixes.
Perform the following steps to install Skype on Debian / Ubuntu:
Step 1: Remove Skype if you previously installed it from the Ubuntu repository:
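The removal and the manual install might look like this. Note the download URL is an assumption (Skype's generic Linux .deb redirect at the time) and may have changed; the repository package name "skype" is also an assumption:

```shell
# Remove a Skype installed from the Ubuntu repository (package name assumed)
sudo apt-get remove skype

# Fetch the 4.2 .deb (URL is Skype's generic redirect and may change),
# install it, then pull in any missing dependencies
wget -O skype.deb http://www.skype.com/go/getskype-linux-deb
sudo dpkg -i skype.deb
sudo apt-get -f install
```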
Very happy to share links for one more Linux distribution: Mageia. Mageia is an RPM-based distribution.
At the end I have mentioned links to download Mageia 3 for 32-bit (i386, i686) and x86_64 (64-bit) architectures.
Mageia 3 – Linux
Major Features in Mageia 3 :-
Updates to RPM (4.11) and urpmi, which have been given a good Mageia tune-up and cleanup
GRUB is the default bootloader; GRUB2 is available to test.
Revamped package groupings for installation and rpmdrake
This article will show you how to change the password of LDAP users.
In this article, I have demonstrated how to change the password for users in 389-ds or Red Hat Directory Server.
This is very simple: you just need to change a few parameters and you will be able to change the password of LDAP users from 389-ds.
Note: If you have not customized LDAP attributes or access rights for changing passwords, this will work. I have successfully tested the same on RHEL / CentOS 5.x, RHEL / CentOS 6.x and 389-ds.
Perform Following steps for the same.
Step 1: Make sure your LDAP is configured and you have the correct suffix (e.g. dc=tejasbarot,dc=com).
Step 2: Make sure the php-ldap package is installed.
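The approach here relies on a PHP script (hence php-ldap), but as a quick sanity check, and as an equivalent command-line route, you can use OpenLDAP's ldappasswd. The bind DN below is only an example built from the suffix above; adjust it to your directory:

```shell
# Confirm php-ldap is installed on RHEL / CentOS
rpm -q php-ldap

# A user can also change their own password directly:
# -W prompts for the current (bind) password, -S prompts for the new one
ldappasswd -x -H ldap://localhost \
  -D "uid=jdoe,ou=People,dc=tejasbarot,dc=com" -W -S
```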
This guide explains how you can install and use KVM for creating and running virtual machines on a CentOS 6.4 server. I will show how to create image-based virtual machines and also virtual machines that use a logical volume (LVM). KVM is short for Kernel-based Virtual Machine and makes use of hardware virtualization, i.e., you need a CPU that supports hardware virtualization, e.g. Intel VT or AMD-V.
I do not issue any guarantee that this will work for you!
1 Preliminary Note
I’m using a CentOS 6.4 server with the hostname server1.example.com and the IP address 192.168.0.100 here as my KVM host.
I had SELinux disabled on my CentOS 6.4 system. I didn't test with SELinux enabled; it might work, but if not, you had better switch off SELinux as well. Edit /etc/selinux/config:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
… and reboot:
reboot
We also need a desktop system where we install virt-manager so that we can connect to the graphical console of the virtual machines that we install. I’m using a Fedora 17 desktop here.
2 Installing KVM
CentOS 6.4 KVM Host:
First check if your CPU supports hardware virtualization – if this is the case, the command
egrep '(vmx|svm)' /proc/cpuinfo
should display something. If nothing is displayed, then your processor doesn't support hardware virtualization, and you must stop here.
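As a quick sanity check you can also just count the flags; a count of 0 means no hardware virtualization support (this assumes a Linux /proc filesystem, so it only works on the host itself):

```shell
# Count vmx (Intel VT) / svm (AMD-V) flags in /proc/cpuinfo;
# prints 0 if the CPU offers no hardware virtualization
grep -E -c 'vmx|svm' /proc/cpuinfo || true
```

A non-zero count here is equivalent to the egrep command above displaying output.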
Now we import the GPG keys for software packages:
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*
To install KVM and virtinst (a tool to create virtual machines), we run
yum install kvm libvirt python-virtinst qemu-kvm
Then start the libvirt daemon:
service libvirtd start
To check if KVM has successfully been installed, run
virsh -c qemu:///system list
It should display something like this:
[root@server1 ~]# virsh -c qemu:///system list
Id Name State
If it displays an error instead, then something went wrong.
Next we need to set up a network bridge on our server so that our virtual machines can be accessed from other hosts as if they were physical systems in the network.
To do this, we install the package bridge-utils…
yum install bridge-utils
… and configure a bridge. Create the file /etc/sysconfig/network-scripts/ifcfg-br0 (please use the IPADDR, PREFIX, GATEWAY, DNS1 and DNS2 values from the /etc/sysconfig/network-scripts/ifcfg-eth0 file); make sure you use TYPE=Bridge, not TYPE=Ethernet:
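A sketch of such a file, using this tutorial's example addresses (copy the real IPADDR, PREFIX, GATEWAY, DNS1 and DNS2 values from your ifcfg-eth0; the DNS servers below are placeholders):

```
# /etc/sysconfig/network-scripts/ifcfg-br0 (example values)
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.0.100
PREFIX=24
GATEWAY=192.168.0.1
DNS1=8.8.8.8
DNS2=8.8.4.4
```

Your ifcfg-eth0 then needs a BRIDGE=br0 line and no IP configuration of its own; restart networking afterwards.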
If you’re using an Ubuntu 12.04 desktop, you can install virt-manager as follows:
sudo apt-get install virt-manager
4 Creating A Debian Squeeze Guest (Image-Based) From The Command Line
CentOS 6.4 KVM Host:
Now let’s go back to our CentOS 6.4 KVM host.
Take a look at
man virt-install
to learn how to use virt-install.
We will create our image-based virtual machines in the directory /var/lib/libvirt/images/ which was created automatically when we installed KVM in chapter two.
To create a Debian Squeeze guest (in bridging mode) with the name vm10, 512MB of RAM, two virtual CPUs, and the disk image /var/lib/libvirt/images/vm10.img (with a size of 12GB), insert the Debian Squeeze Netinstall CD into the CD drive and run
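The virt-install invocation for the setup described above might look like the following. This is a sketch: the option names are from the virt-install shipped with CentOS 6-era python-virtinst, so check man virt-install on your host before relying on them:

```shell
virt-install --connect qemu:///system \
  -n vm10 -r 512 --vcpus=2 \
  --disk path=/var/lib/libvirt/images/vm10.img,size=12 \
  -c /dev/cdrom --vnc --noautoconsole \
  --os-type linux --os-variant debiansqueeze \
  --accelerate --network=bridge:br0 --hvm
```

The --network=bridge:br0 option attaches the guest to the br0 bridge configured earlier, so it appears as a regular host on your network.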
Of course, you can also create an ISO image of the Debian Squeeze Netinstall CD (please create it in the /var/lib/libvirt/images/ directory because later on I will show how to create virtual machines through virt-manager from your Fedora desktop, and virt-manager will look for ISO images in the /var/lib/libvirt/images/ directory)…
Allocating ‘vm10.img’ | 12 GB 00:00
Creating domain… | 0 B 00:00
Domain installation still in progress. You can reconnect to
the console to complete the installation process.
Posted on May 1, 2013 at 10:00 am by Tejas Barot
In Linux, many people implement user quotas and group quotas and might want to send warnings to users by e-mail.
This becomes very important when a mail server is running and you have implemented disk / user / group quotas, so a basic requirement is to inform users of their quota usage once they hit the configured limit.
So I am writing this article for those who have implemented quotas and do not know how to configure quota warnings by e-mail.
1. The quota / warnquota packages must be installed.
To confirm:
On RHEL / CentOS: [root@tejas-barot-linux-support ~]# rpm -qa | grep quota (output should not be empty)
On Ubuntu: root@tejas-barot-linux-support:~# dpkg --list | grep quota (output should not be empty)
2. The partition / device must be mounted with the usrquota option.
On RHEL / CentOS / Ubuntu: [root@tejas-barot-linux-support ~]# mount | grep quota (output should not be empty)
3. Quota must be enabled on the partition.
4. If the above commands give proper output, you are good to go for further configuration.
5. Now open /etc/warnquota.conf and modify the following values as per your requirement.
MAIL_CMD = "/usr/sbin/sendmail -t"
FROM = "firstname.lastname@example.org"
SUBJECT = NOTE: Your mailbox has exceeded allocated disk space limits
CC_TO = "email@example.com"
SUPPORT = "firstname.lastname@example.org"
PHONE = "000 111-2222"
MESSAGE = Your mailbox has exceeded the allotted limit\
on this server|Please delete any unnecessary email in your mailbox on:|
SIGNATURE = This message is automatically generated by the mail system.
Once you are done configuring these parameters, save and exit the file.
Description of the above configuration options:
MAIL_CMD = the command used to send the e-mail.
FROM = the e-mail address displayed to the recipient.
SUBJECT = the subject line you want for the warning e-mail.
CC_TO = the address that will receive a carbon copy of the mail.
SUPPORT = the e-mail address mentioned as the support contact, or anybody else who should be informed.
PHONE = of course, the number you want to display as your contact number.
MESSAGE = the detailed message or instructions you want to send to the user; in short, the body of the mail.
SIGNATURE = the text to set as your signature.
6. Once it is configured properly, execute the following command to send e-mail to all users / groups who have exceeded their quota or grace limit:
[root@tejas-barot-linux-support ~]# warnquota
Execute the following command to send e-mail to a particular user who has exceeded the quota or grace limit:
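In practice, warnquota is usually run once a day from cron so offenders are reminded automatically; a minimal sketch (the script path is an assumption, the binary path is where the RHEL quota package installs it):

```
#!/bin/sh
# /etc/cron.daily/warnquota.sh - mail all over-quota users once a day
/usr/sbin/warnquota
```

Running warnquota -g does the same for group quotas.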
GlusterFS is used in environments where high performance, redundancy and reliability are at a premium. The best part is that it's exceedingly easy to use.
GlusterFS is a file system that is designed to provide network storage that can be made redundant, fault-tolerant and scalable. It’s particularly well suited to applications that require high-performance access to large files. With GlusterFS, you can have enterprise- or scientific-research-grade storage up and running in minutes, but it wouldn’t be our first choice for the type of simple file sharing that Samba or NFS are usually used for.
Although GlusterFS can do striping (chopping files into parts), it isn't the preferred approach. Typically, additional storage nodes, or 'bricks' as they are called, are used either for replicated (redundant) data or for distributed storage that adds capacity and improves performance.
GlusterFS expects the clients to be running the FUSE (user-space) file system driver, but since version 3.x, GlusterFS automatically enables NFS access to the volumes. The built-in NFS server offers better performance when accessing lots of small files, for applications such as web serving or a remote /home directory. Bear in mind that getting GlusterFS's NFS working alongside existing NFS shares is outside the scope of this tutorial. The most amazing thing about GlusterFS is that it's very simple to use and maintain, as we intend to show you here.
GlusterFS is at its best when connected to Gigabit Ethernet and a large array of servers and storage devices. However, a combination of two computers or even two VMs is sufficient when learning how to use GlusterFS.
Become root by typing:
sudo -i
…on Ubuntu and derivatives. This saves having to type 'sudo' before every command. Use the 'su' command on other distros. Consider opening a terminal in another tab, for example, to carry out actions as a standard user.
Compare version numbers between your distro and the website. If you manually install a newer server, you might have to update the clients as well. If your distro is offering a recent enough version, you can install by typing:
apt-get install glusterfs-server
Switch to static IP
Open /etc/network/interfaces in a text editor. If present, remove the DHCP line for the interface (typically iface eth0 inet dhcp) and replace it with a static configuration, adjusting the details for your network. Restart the machine and test the network.
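A static stanza in /etc/network/interfaces might look like this, using addresses consistent with the rest of this tutorial (interface name and all values are examples to adjust for your network):

```
auto eth0
iface eth0 inet static
    address 192.168.0.100
    netmask 255.255.255.0
    gateway 192.168.0.1
```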
Adding and removing volumes
Use the following command:
gluster volume create testvol 192.168.0.100:/data
This creates a volume called ‘testvol’ that is stored on the server at 192.168.0.100. The files are located in a directory called /data in the root file system of the server, and this is what GlusterFS refers to as a brick. Then, type:
gluster volume start testvol
gluster volume info
to verify that it works. You can remove this volume later on by typing:
gluster volume stop testvol
gluster volume delete testvol
Mount the volume locally
We'll now mount the volume locally from the server itself. Create a mount point and mount the volume with the GlusterFS FUSE client:
mkdir /mnt/glustertest
mount -t glusterfs 192.168.0.100:/testvol /mnt/glustertest
Recent versions of GlusterFS automatically enable NFS access to volumes. To make it work, you need to add the portmap package to the server. Then, you can mount the volume using NFS by adding a mount point:
sudo mkdir /mnt/nfsgluster
and then typing
sudo mount -t nfs 192.168.0.100:/testvol /mnt/nfsgluster -o tcp,vers=3
To make a client mount the share on boot, add the details of the GlusterFS NFS share to /etc/fstab in the normal way. For our example, add the line:
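For our example, the line could be as follows (the mount point and options are assumptions consistent with the mount command above; _netdev delays the mount until the network is up):

```
192.168.0.100:/testvol  /mnt/nfsgluster  nfs  defaults,_netdev,tcp,vers=3  0  0
```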
Begin by setting up a new server, as shown in the earlier steps. Give the new server an IP address such as 192.168.0.101. Note that you can run these commands on any GlusterFS server. Type:
gluster peer probe 192.168.0.101
and then type
gluster peer status
to check the status of the new server.
Edit hosts file
Admin can be carried out from any Gluster server, so add the servers to the hosts file of your admin machine if you prefer to work with names rather than IP addresses. For example, edit /etc/hosts with a text editor, and add a line such as 192.168.0.100 server1 for each server.
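With the second server probed, a two-way replicated volume can then be created; a sketch using this tutorial's addresses and an assumed /data brick directory on each server (if the single-brick testvol from earlier still exists, stop and delete it first):

```shell
gluster volume create testvol replica 2 \
  192.168.0.100:/data 192.168.0.101:/data
gluster volume start testvol
```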
When creating a replicated volume, the order in which the servers are specified ensures that each contains a brick and a copy of the other's brick, an arrangement that maintains redundancy if either server fails or becomes unavailable.
By default, any client can connect to a GlusterFS server. However, you can limit access using the command:
gluster volume set testvol auth.allow [list of addresses]
Note that this command supports the use of wildcards to authorise a range. You might find it more convenient to locate the configuration file for the volume in /etc/glusterd/vols/[name of volume]/. Open the file up in a text editor and scroll down to ‘option auth.addr./data.allow’. Replace the asterisk with a list or range of authorised client IP addresses.
Adding and removing bricks
You can add extra bricks on the fly, but they must be in multiples of the existing storage. For example, if you have a replicated storage volume, you must add two bricks to expand it.
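The add-brick command handles this; a sketch assuming a replica-2 volume and two additional servers at example addresses, each with a /data brick directory prepared:

```shell
gluster volume add-brick testvol \
  192.168.0.102:/data 192.168.0.103:/data
```

After adding bricks, gluster volume info should show the enlarged brick list for testvol.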
You may like to prepare a blank storage drive for GlusterFS bricks. Ext4 and ext3 are supported along with XFS. Which one you choose depends on your requirements and your experience with each, but the consensus is that the advantages of XFS only start to come to the fore under huge loads and enormous storage spaces.
Examine the storage
The easiest way to check on your mounted storage is with the df -h command. gluster volume info lists all mounted and active volumes and the bricks that they are composed of. You can examine the bricks and their contents by browsing the directories normally.
Original Link :- http://www.linuxuser.co.uk/tutorials/create-your-own-high-performance-nas-using-glusterfs
Hope this helps you all. If you face any issue with this, or it is somehow not working for you, please raise your questions / issues at http://linuxforums.tejasbarot.com
If you like this, then please click the Google +1 button and show your support. Your support will encourage me to write more articles.