Friday, November 16, 2018

Vlookup equivalent in awk: Reader's request

Objective: Print the values of fields 1 and 3 from file b.txt for the rows whose first field matches field 4 of a line in a.txt.

a.txt
1,2,3,DELHI,4,5
4,2,1,MUMBAI,8,2
12,11,54,PUNE,11,2
7,12,8,GOA,9,3
9,2,1,BANGALORE,2,3

b.txt
PUNE,13,10000
MUMBAI,100,20000
GOA,9,4000
NOIDA,43,9000
DELHI,7,3000
GURGAON,8,800

Solution: awk -F ',' 'FNR == NR {array[$4]; next} {if ($1 in array) print $1 "," $3}' a.txt b.txt

Example: [user@server ~]$ awk -F ',' 'FNR == NR {array[$4]; next} {if ($1 in array) print $1 "," $3}' a.txt b.txt
PUNE,10000
MUMBAI,20000
GOA,4000
DELHI,3000
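
If you also need to carry a column from a.txt into the output (a true vlookup-style join), a small variation of the same FNR == NR idea works. The extra third output column here (field 1 of a.txt) is only an illustration; with the sample files above it should print:

awk -F ',' 'FNR == NR {array[$4] = $1; next} {if ($1 in array) print $1 "," $3 "," array[$1]}' a.txt b.txt
PUNE,10000,12
MUMBAI,20000,4
GOA,4000,7
DELHI,3000,1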

Friday, May 19, 2017

Turning up the Big data curtain!


It's been a long time since we posted, and there is a reason for that. Please keep reading.
Change is inevitable. Let your glorious past be cherished by others. Let it remain in your memories. Let it add value to your profession. Let it give you confidence. But do not let it ruin your future plans.

Technology comes first. One can stand out from the crowd only through sound, logical decision making. As per NASSCOM, 40% of IT professionals are unskilled and unprepared to deal with upcoming IT models. My view is that, out of this 40%, around 30% know about it but still do nothing about it. People keep discussing it and losing time very fast.

It is already late if we have not upgraded ourselves yet. After a lot of study of next-gen technologies, we could figure out that the IT industry is racing towards automation and jobs are at stake. Time to pull up our socks and take guard.

Based on several trends, we arrived at the following conclusions:

1. Big Data and analytics-related technologies are the need of the hour. It is time to analyze trends and build predictive models.
2. Virtualization is mandatory in big companies, both for internal infrastructure and from a product point of view.
3. Virtualization is now delivered via the cloud, be it Infrastructure as a Service (IaaS), Software as a Service (SaaS), Platform as a Service (PaaS), Data as a Service (DaaS), etc.

This is just a beginning.

I personally want to make a one-year plan for myself, building expertise in a particular order:

Big Data and Hadoop --> Map Reduce --> Data Analytics using various tools --> Data Science --> Machine Learning --> Robotics for automation

It is measurable and time-bound, and of course it means one year of dedicated hard work. But that is how we invest in ourselves.

Yes, there can be many more points, but the above is what we have studied. Now we need to take things forward in our own way. Let's start fresh.

This blog will be more about Big Data and related topics from now on. We have received great testimonials from Unix/Linux professionals about our blog, but now we hope to keep up with the pace of next-gen technologies. We are starting with Big Data, and posts will come as we learn: the more we read, the more we post; the more queries we get, the more we answer. Let's make it a combined effort and invest in ourselves for our own benefit.

Keep studying, you are in IT!

Monday, March 16, 2015

WhatsApp voice-calling comes to all Android users

Minimum WhatsApp version required: 2.11.528

How to activate: Voice calling can be activated by receiving a call from someone whose voice calling is already activated. After you get the call, close and then reopen the app. Instead of the most recent chats, you will then see three tabs: Calls, Chats, and Contacts. The Calls tab shows incoming, outgoing, and missed calls with their exact times.


Beta testing of the feature is still going on, and this time WhatsApp has decided to provide a bigger invite-only window.

Try your luck!!

Friday, February 27, 2015

How To Fix “Device eth0 does not seem to be present, delaying initialization” Error!!


Try to start Eth0 device:
# ifup eth0
Device eth0 does not seem to be present, delaying initialization

First step: check that the MAC addresses are set correctly:
[root@centOS network-scripts]# cat ifcfg-eth0 | grep HWA

HWADDR="08:00:27:FC:73:2A"

[root@centOS network-scripts]# cat ifcfg-eth1 | grep HWA

HWADDR="08:00:27:09:E1:75"

[root@centOS network-scripts]# cat ifcfg-eth2 | grep HWA

HWADDR="08:00:27:31:2D:D6"

[root@centOS network-scripts]# ip -o link

1: lo: mtu 16436 qdisc noqueue state   UNKNOWN \    link/loopback   00:00:00:00:00:00 brd 00:00:00:00:00:00

2: eth0: mtu 1500 qdisc   pfifo_fast state UP qlen 1000\      link/ether 08:00:27:fc:73:2a brd ff:ff:ff:ff:ff:ff

3: eth8: mtu 1500 qdisc   pfifo_fast state UP qlen 1000\      link/ether 08:00:27:09:e1:75 brd ff:ff:ff:ff:ff:ff

4: eth2: mtu 1500 qdisc   pfifo_fast state UP qlen 1000\      link/ether 08:00:27:31:2d:d6 brd ff:ff:ff:ff:ff:ff

There's no eth1, but there is an eth8. Let's check what dmesg has to say:
[root@centOS network-scripts]# dmesg | grep eth1

e1000 0000:00:08.0: eth1: (PCI:33MHz:32-bit) 08:00:27:09:e1:75

e1000 0000:00:08.0: eth1: Intel(R) PRO/1000 Network Connection

udev: renamed network interface eth1 to eth8.

So udev decided to rename my interface. Why is this? A look at /etc/udev/rules.d/70-persistent-net.rules revealed the answer:
[root@centOS ~]# cat /etc/udev/rules.d/70-persistent-net.rules | grep "08:00:27:09:e1:75"

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:09:e1:75", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:09:e1:75", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth8"

It looks like there are multiple entries for the same MAC address. Remove the incorrect entry and restart the interface using the command below.

# ifup eth0
That's it.
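
For reference, a rough cleanup sequence might look like the sketch below. The NAME="eth8" pattern and the interface name are taken from the example above, so adjust them to whatever stale entry your own rules file shows:

# Back up the rules file, then drop the stale line that renames the NIC to eth8
cp /etc/udev/rules.d/70-persistent-net.rules /etc/udev/rules.d/70-persistent-net.rules.bak
sed -i '/NAME="eth8"/d' /etc/udev/rules.d/70-persistent-net.rules
# Ask udev to re-read its rules (a reboot also works on older releases)
udevadm control --reload-rules
# Bring the interface up again (eth0 or eth1, whichever the error referred to)
ifup eth1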

Saturday, February 21, 2015

Cron

The software utility Cron is a time-based job scheduler in Unix-like computer operating systems. People who set up and maintain software environments use cron to schedule jobs (commands or shell scripts) to run periodically at fixed times, dates, or intervals. It typically automates system maintenance or administration—though its general-purpose nature makes it useful for things like connecting to the Internet and downloading email at regular intervals.

Cron is driven by a crontab (cron table) file, a configuration file that specifies shell commands to run periodically on a given schedule. The crontab files are stored where the lists of jobs and other instructions to the cron daemon are kept. Users can have their own individual crontab files and often there is a system wide crontab file (usually in /etc or a subdirectory of /etc) that only system administrators can edit.





For Scheduling Repetitive Jobs: crontab

You can schedule routine system administration tasks to execute daily, weekly, or monthly by using the crontab command.
Daily crontab system administration tasks might include the following:
  • Removing files more than a few days old from temporary directories
  • Executing accounting summary commands
  • Taking snapshots of the system by using the df and ps commands
  • Performing daily security monitoring
  • Running system backups
Weekly crontab system administration tasks might include the following:
  • Rebuilding the catman database for use by the man -k command
  • Running the fsck -n command to list any disk problems
Monthly crontab system administration tasks might include the following:
  • Listing files not used during a specific month
  • Producing monthly accounting reports

Controlling Access to the crontab Command

You can control access to the crontab command by using two files in the /etc/cron.d directory: cron.deny and cron.allow. These files permit only specified users to perform crontab command tasks such as creating, editing, displaying, or removing their own crontab files.
The cron.deny and cron.allow files consist of a list of user names, one user name per line.
These access control files work together as follows:
  • If cron.allow exists, only the users who are listed in this file can create, edit, display, or remove crontab files.
  • If cron.allow does not exist, all users can submit crontab files, except for users who are listed in cron.deny.
  • If neither cron.allow nor cron.deny exists, superuser privileges are required to run the crontab command.
Superuser privileges are required to edit or create the cron.deny and cron.allow files.
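
For example, limiting crontab to just two accounts (the user names and prompt below are only placeholders) is a matter of listing them, one per line:

[root@server ~]# cat /etc/cron.d/cron.allow
root
oracle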

Note:
  • Check for job errors in the /var/spool/mail/username file.
  • Escape the percent (%) sign with a backslash when scheduling a job from the command line (see the sample entries below).
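
As an illustration of the crontab format and of the percent-sign caveat above, entries of the kind listed earlier might look like this (paths, times, and retention periods are only placeholders):

# min hour day-of-month month day-of-week command
30 2 * * * find /tmp -type f -mtime +7 -delete
0 3 * * 0 df -h > /var/log/df_$(date +\%F).log 2>&1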


Saturday, February 7, 2015

January 19, 2038

What will happen on January 19, 2038?

On this date the 32-bit signed Unix time stamp will overflow and cease to work. Before this moment, millions of applications will need to either adopt a new convention for time stamps or be migrated to 64-bit systems, which will buy the time stamp a "bit" more time.

What is the unix time stamp?

The unix time stamp is a way to track time as a running total of seconds. This count starts at the Unix Epoch on January 1st, 1970 at 00:00:00 UTC. Therefore, the unix time stamp is merely the number of seconds between a particular date and the Unix Epoch. It should also be pointed out that this point in time technically does not change no matter where you are located on the globe. This is very useful to computer systems for tracking and sorting dated information in dynamic and distributed applications, both online and client side.
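
To see the boundary yourself on a system with GNU date, convert the largest signed 32-bit value (2^31 - 1 = 2147483647) directly:

$ date -u -d @2147483647
Tue Jan 19 03:14:07 UTC 2038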

Wednesday, February 4, 2015

GHOST: glibc vulnerability (CVE-2015-0235): Release Date: 2015-01-27

A very serious security problem has been found in the GNU C Library (Glibc) called GHOST. How can I fix GHOST vulnerability and protect my Linux server against the attack? How do I verify that my server has been fixed against the Glibc GHOST vulnerability? And what is this all about?

A heap-based buffer overflow was found in glibc's __nss_hostname_digits_dots() function, which is used by the gethostbyname() and gethostbyname2() glibc function calls. A remote attacker able to make an application call either of these functions could use this flaw to execute arbitrary code with the permissions of the user running the application.
See more information about CVE-2015-0235 from MITRE CVE dictionary and NIST NVD.

Link to download patch for Oracle Linux:
http://linux.oracle.com/cve/CVE-2015-0235.html
Link to download patch for RHEL:
https://access.redhat.com/articles/1332213
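
Before patching, it may be worth noting the glibc build you currently have; whether that exact build is vulnerable depends on your distribution's errata, so treat the commands below only as a quick check:

rpm -q glibc        # RPM-based systems
ldd --version | head -1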

An easy way to fix the GHOST vulnerability on CentOS/RHEL/Fedora/Scientific Linux:
sudo yum clean all
sudo yum update

An easy way to fix the GHOST vulnerability on Ubuntu/Debian Linux:
sudo apt-get clean
sudo apt-get update
sudo apt-get upgrade

An easy way to fix the GHOST vulnerability on SUSE Linux Enterprise:
SUSE Linux Enterprise Software Development Kit 11 SP3

zypper in -t patch sdksp3-glibc-10206

SUSE Linux Enterprise Server 11 SP3 for VMware

zypper in -t patch slessp3-glibc-10206

SUSE Linux Enterprise Server 11 SP3

zypper in -t patch slessp3-glibc-10206

SUSE Linux Enterprise Server 11 SP2 LTSS

zypper in -t patch slessp2-glibc-10204

SUSE Linux Enterprise Server 11 SP1 LTSS

zypper in -t patch slessp1-glibc-10202

SUSE Linux Enterprise Desktop 11 SP3

zypper in -t patch sledsp3-glibc-10206

Finally, run the following on any SUSE Linux version to bring your system up to date:

zypper patch

Fix the GHOST vulnerability on OpenSUSE Linux:
zypper lu
zypper up

Note: A reboot is required after the update on all Linux distributions.

Wednesday, January 28, 2015

QMole – The World of Linux on an iPad Near You

Have you noticed that many stock applications which are free on Linux require payment on the iPhone™ and iPad™? Don't want to re-implement Linux software on iOS™? QMole is the answer. QMole is a new desktop system allowing the free operation of software ported from the world of Linux on the iPad. The technology retains touch-screen operation of stock GTK Linux applications without requiring their redesign or reimplementation. Unlike remote desktop solutions that require a network connection, QMole requires none: all "Linux" applications execute locally on the iPad™, just like native iOS™ applications.

http://youtu.be/ofk0M1LjdtU

Tuesday, January 27, 2015

How to create DB replication in MySql from command line

Make sure you have entries for both the primary and standby servers in the /etc/hosts file on both machines.

[root@primaryhostname bin]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               localhost
::1             localhost6.localdomain6 localhost6
10.64.30.8 primaryhostname
10.64.30.9 standbyhostname


[root@standbyhostname bin]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               localhost
::1             localhost6.localdomain6 localhost6
10.64.30.8 primaryhostname
10.64.30.9 standbyhostname
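
The steps below assume binary logging is already enabled and that each server has a unique server ID; a minimal my.cnf fragment (the values are illustrative) would be:

[mysqld]
server-id = 1        # use a different value, e.g. 2, on the standby
log-bin = log-bin    # produces the log-bin.000001 file referenced below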

 
Run on primary server:
======================
GRANT ALL PRIVILEGES ON *.* TO root@'primaryhostname' IDENTIFIED BY '';
GRANT ALL PRIVILEGES ON *.* TO root@'standbyhostname' IDENTIFIED BY '';
CREATE USER 'repl'@'primaryhostname' IDENTIFIED BY '';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'standbyhostname' IDENTIFIED BY '';
FLUSH PRIVILEGES;
mysql> show master status;
+----------------+----------+--------------+------------------+
| File           | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+----------------+----------+--------------+------------------+
| log-bin.000001 |       98 | ManuAppl     | mysql            |
+----------------+----------+--------------+------------------+
1 row in set (0.00 sec)
(Note down the File and Position)

Run on standby server:
======================
GRANT ALL PRIVILEGES ON *.* TO root@'primaryhostname' IDENTIFIED BY '';
GRANT ALL PRIVILEGES ON *.* TO root@'standbyhostname' IDENTIFIED BY '';
CREATE USER 'repl'@'standbyhostname' IDENTIFIED BY '';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'primaryhostname' IDENTIFIED BY '';
FLUSH PRIVILEGES;
show master status;
(Note down the File and Position)


Run on primary server:
======================

CHANGE MASTER TO MASTER_HOST='standbyhostname', MASTER_PORT=3306, MASTER_USER='repl', MASTER_PASSWORD='', MASTER_LOG_POS=98, MASTER_LOG_FILE='log-bin.000001';
START SLAVE;

Run on standby server:
======================
CHANGE MASTER TO MASTER_HOST='primaryhostname', MASTER_PORT=3306, MASTER_USER='repl', MASTER_PASSWORD='', MASTER_LOG_POS=98, MASTER_LOG_FILE='log-bin.000001';
START SLAVE;
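
Once START SLAVE has been issued on both sides, replication health can be checked on each server; both slave threads should report Yes (the output below is trimmed and will vary by MySQL version):

mysql> SHOW SLAVE STATUS\G
...
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
...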

Sunday, January 11, 2015

Taking java heap dump and thread dump in linux from command line

Taking java heap dump:
======================
jmap -dump:format=b,file=output_file.bin PID_of_process
Example:   /usr/java/jdk1.7.0_55/bin/jmap -dump:format=b,file=/tmp/heapdump_PID_15034.bin 15034

Taking java thread dump:
========================
jstack -l PID_of_process > output_file.txt
Example:   /usr/java/jdk1.7.0_55/bin/jstack -l 15034 >/tmp/threaddump.txt
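
If the Java process ID is not known up front, it can usually be found with the JDK's jps tool or with plain ps, for example:

/usr/java/jdk1.7.0_55/bin/jps -l     # lists running JVMs with their PIDs and main classes
ps -ef | grep '[j]ava'               # fallback; the bracket trick keeps grep from matching itself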
 
