2010/05/27

vmstat with timestamp




How to insert a timestamp

1. Create the following simple timestamp.pl:

# vi timestamp.pl

#!/usr/bin/perl
while (<>) { print localtime() . ": $_"; }



2. Run vmstat and pipe the output to timestamp.pl

# chmod 755 timestamp.pl
# vmstat 1 5 | ./timestamp.pl
Tue Dec 8 16:54:14 2009: kthr memory page disk faults cpu
Tue Dec 8 16:54:14 2009: r b w swap free re mf pi po fr de sr m0 m1 m4 m5 in sy cs us sy id
Tue Dec 8 16:54:14 2009: 0 0 0 7506072 518520 52 102 65 4 4 0 0 1 8 0 2 1551 3179 3610 2 2 96
Tue Dec 8 16:54:15 2009: 0 0 0 7158824 237504 113 238 155 0 0 0 0 421 0 0 2 495 1120 1154 1 2 98
Tue Dec 8 16:54:16 2009: 0 0 0 7158824 237416 2 2 0 0 0 0 0 0 0 0 0 359 797 789 1 0 99
Tue Dec 8 16:54:17 2009: 0 0 0 7158824 237400 36 42 0 0 0 0 0 0 0 0 0 513 740 1149 0 1 99
Tue Dec 8 16:54:18 2009: 0 0 0 7158824 237248 2 2 0 0 0 0 0 0 0 0 0 356 611 729 1 1 98
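If you prefer a sortable ISO-style timestamp, a variant of timestamp.pl using the standard POSIX module could look like this (a sketch; adjust the format string to taste):

#!/usr/bin/perl
use POSIX qw(strftime);
# prefix each input line with a local timestamp like "2009-12-08 16:54:14"
while (<>) { print strftime("%Y-%m-%d %H:%M:%S", localtime()) . ": $_"; }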

Touch command

How to change the modification date and time of an existing file

Example:

$ ls -l abc
-rw-rw-r-- 1 sunshine se 16645 Apr 30 10:34 abc
$ touch -t 201003010800 abc
$ ls -l abc
-rw-rw-r-- 1 sunshine se 16645 Mar 1 08:00 abc
==============================================
Option
-t time
Uses the specified time instead of the current time.
time will be a decimal number of the form:

[[CC]YY]MMDDhhmm[.SS]
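For example, to give the century and the seconds explicitly (using the same file as above):

$ touch -t 201003010800.30 abc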

2009/04/08

Tuning Helper Processes

Summary

On the secondary, it is now possible to start "helper" processes to improve the rate at which the transaction stream from the primary can be processed. mupip replicate -receiver -start now accepts an additional qualifier -he[lpers]=[m[,n]], where m is the total number of helper processes and n is the number of reader helper processes. There are additional parameters in the database file header to tune the performance of the update process and its helpers. DSE can be used to modify these parameters. (D9E10-002497)


Detailed Description


GT.M replication can be thought of as a pipeline, where a transaction [1] is committed at the primary, transported over a TCP connection to the secondary, and committed at the secondary. While there is buffering to handle load spikes, the sustained throughput of the pipeline is limited by the capacity of its narrowest stage. Except when the bottleneck is the first stage, there will be a build up of a backlog of transactions within the pipeline. [2] Note also that there is always a bottleneck that limits throughput - if there were no bottleneck, throughput would be infinite.
Since GT.M has no control over the network from the primary to the secondary, it is not discussed here. If the network is the bottleneck, the only solution is to increase its capacity.
Unusual among database engines in not having a daemon, the GT.M database performs best when there are multiple processes accessing the database and cooperating with one another to manage it. When GT.M replication is in use at a logical dual site deployment of an application, the processes at the primary need to execute business logic to compute database updates, whereas the processes at the secondary do not. Thus, if throughput at the primary is limited by the execution of business logic, the primary can be the bottleneck, and there would be no backlog. On the other hand, if the throughput at the primary is limited by the rate at which the database can commit data, it is conceivable that the multiple processes of the primary can outperform a secondary with a solitary update process, thus causing the build-up of a backlog. To a first approximation, there are two ways that the multiple GT.M processes of a primary that executes business logic can outperform a secondary executing only one GT.M process on identical hardware:


1. In order to update a database, the database blocks to be updated must first be read from disk, into the operating system buffers and thence into the GT.M global buffer cache. On the primary, the execution of business logic will itself frequently bring the blocks to be updated into the global buffer cache, since the global variables to be updated are likely to be read by the application code before they are updated.

2. When updating a database, the database blocks and journal generated by one process may well be written to disk by an entirely different process, which better exploits the IO parallelism of most modern operating systems.


For those situations in which the update process on the secondary is a bottleneck, GT.M V5.0-000 implements the concept of helper processes to increase database throughput on the secondary. There can be a maximum of 128 helper processes.


On the secondary, the receive server process communicates with the primary and feeds a stream of update records into the receive pool. The update process reads these update records and applies them to the journal and database files via the journal buffers and global buffer cache in shared memory. Helper processes operate as follows:

1. Reader helper processes read the update records in the receive pool and attempt to pre-fetch blocks to be updated into the global buffer cache, so that they are available for the update process when it needs them.
2. Writer helper processes help to exploit the operating system's IO parallelism the way additional GT.M processes do on the primary.


MUPIP Commands

The primary interface for managing helper processes is MUPIP.
The command used to start the receiver server, mupip replicate -receiver -start, now takes an additional qualifier, -he[lpers][=m[,n]], to start helper processes.
- If the qualifier is not used, or if -helpers=0[,n] is specified, no helper processes are started.
- If the qualifier is used, but neither m nor n is specified, the default number of helper processes with the default proportion of roles is started. In V5.0-000, the default number of aggregate helper processes is 8, of which 5 are reader helpers.
- If the qualifier is used, and m is specified, but n is not specified, m helper processes are started of which floor(5*m/8) processes are reader helpers.
- If both m and n are specified, m helper processes are started of which n are reader helpers. On UNIX, helper processes can be identified in a process listing as mupip replicate -updhelper -reader and mupip replicate -updhelper -writer. On OpenVMS, readers have the prefix GTMUHR and writers have the prefix GTMUHW.
Shutting down the receiver server normally, with mupip replicate -receiver -shutdown, will also shut down all helper processes. The command mupip replicate -receiver -shutdown -he[lpers] will shut down only the helper processes, leaving the receiver server and update process to continue operating.
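For example, the following commands (with arbitrary helper counts, and with the other qualifiers normally given to -receiver -start omitted) start the receiver server with 12 helper processes of which 8 are readers, and later shut down only the helpers:

$ mupip replicate -receiver -start -helpers=12,8
$ mupip replicate -receiver -shutdown -helpers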


Individual helper processes can be shut down with the mupip stop command. Fidelity recommends against this course of action except in the event of some unforeseen abnormal event.

mupip replicate -receiver -checkhealth accepts the optional qualifier -he[lpers]. If -he[lpers] is specified, the status of the helper processes is displayed in addition to the status of the receiver server and update process.
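For example (the unabbreviated form of the qualifier is shown):

$ mupip replicate -receiver -checkhealth -helpers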

DSE Commands

There are a number of parameters in the database file header that control the behavior of helper processes, and which can be tuned for performance. Although it is believed that the performance of the update process with helper processes is not very sensitive to the values of the parameters over a broad range, each operating environment will be different because the helper processes must strike a balance. For example, if the reader processes are not aggressive enough in bringing database blocks into the global buffer cache, this work will be done by the update process, but if the reader processes are too aggressive, then the cache blocks they use for these database blocks may be overwritten by the update process to commit transactions that are earlier in the update stream. [3]


The DSE dump -fileheader -u[pdproc] command can be used to get a dump of the file header including these helper process parameters, and the DSE change -fileheader command can be used to modify the values of these parameters.
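A sketch of such a session (the region name and the new value are only examples, and the exact set of fields shown in the dump varies with the GT.M version):

$ dse
DSE> find -region=DEFAULT
DSE> dump -fileheader -updproc
DSE> change -fileheader -avg_blks_read=300
DSE> exit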


Average Blocks Read per 100 Records

The records in the update stream received from the primary describe logical updates to global variables. Each update will involve reading one or more database blocks. Avg blks read per 100 records is an estimate of the number of database blocks that will be read for 100 update records. A good value to use is 100 times the average height of the global variable trees on disk. In V5.0-000, the default value is 200, which would be a good approximation for a small global variable (one index block plus one data block). For very large databases, the value could be increased up to 400.

The DSE command change -fileheader -avg_blks_read=n sets the value of Avg blks read per 100 Records to n for the current region.


Update Process Reserved Area

When so requested by the update process, reader helpers will read global variables referenced by records from the receive pool. The number of records read from the receive pool will be:

(100-upd_reserved_area)*No_of_global_buffers/avg_blks_read

In other words, this field is an approximate percentage (an integer value from 0 to 100) of the global buffers reserved for the update process; the reader helper processes will leave at least this percentage of the global buffers for the update process to use. In V5.0-000, the default value is 50, i.e., 50% of the global buffers are reserved for the update process and up to 50% will be filled by reader helper processes.
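A worked example (the buffer count here is only illustrative): with 2,000 global buffers, the default Upd reserved area of 50 and the default Avg blks read per 100 records of 200, the reader helpers will read ahead approximately (100-50)*2000/200 = 500 update records before pausing.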

The DSE command change -fileheader -upd_reserved_area=n sets the value of Upd reserved area to n for the current region.


Pre read trigger factor

When the reader helpers have read that number of update records from the receive pool, they will suspend their reading. Whenever the update process has processed Pre read trigger factor percent of Upd reserved area, it will signal the reader helper processes to resume processing journal records and reading global variables into the global buffer cache. In V5.0-000, the default value is 50, i.e., when 50% of the upd reserved area global buffers have been processed by the update process, it triggers the reader helpers to resume, in case they were idle. The number of records read by the update process before it signals the reader helpers to resume reading will be:

upd_reserved_area*pre_read_trigger_factor*No_of_global_buffers/(avg_blks_read*100)
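Continuing the same illustrative example (2,000 global buffers, Upd reserved area of 50, Avg blks read of 200, Pre read trigger factor of 50), the update process will signal the reader helpers to resume after processing approximately 50*50*2000/(200*100) = 250 update records.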

The DSE command change -file_header -pre_read_trigger_factor=n sets the value of Pre read trigger factor to n for the current region.

Update writer trigger factor


One of the parameters used by GT.M to manage the database is the flush trigger. One of several conditions that causes normal GT.M processes to initiate flushing dirty buffers from the database global buffer cache is when the number of dirty buffers crosses the flush trigger. GT.M processes dynamically tune this value in normal use. In an attempt to never require the update process itself to flush dirty buffers, writer helper processes start flushing dirty buffers to disk when the number of dirty global buffers crosses Upd writer trigger factor percent of the flush trigger. In V5.0-000, the default value is 33, i.e., 33%.
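For example, with a hypothetical flush trigger of 900 dirty buffers and the default factor of 33, the writer helpers would begin flushing once roughly 0.33*900 = 297 global buffers are dirty.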

The DSE command change -file_header -upd_writer_trigger_factor=n sets the value of Upd writer trigger factor to n for the current region.

2008/12/26

Backing Up and Restoring the Solaris OS With "ufsdump"

This Tech Tip describes a backup and restore procedure for the Solaris 8 or 9 Operating System using the ufsdump command.

Backing Up the OS

1. For this example, we are using c0t0d0s0 as a root partition. Bring the system into single-user mode (recommended).
# init s
2. Check the partition consistency.
# fsck -m /dev/rdsk/c0t0d0s0
3. Verify the tape device status:
# mt status
Or use this command when you want to specify the raw tape device, where x is the interface:
# mt -f /dev/rmt/x status
4. Back up the system:
a) When the tape drive is attached to your local system, use this:
# ufsdump 0uf /dev/rmt/0n /
b) When you want to back up from disk to disk, for example, if you want to back up c0t0d0s0 to c0t1d0s0:
# mkdir /tmp/backup
# mount /dev/dsk/c0t1d0s0 /tmp/backup
# ufsdump 0f - / | (cd /tmp/backup; ufsrestore xvf -)
c) When you want to back up to a remote tape drive: on the system that has the tape drive, add the line "hostname root" to its /.rhosts file, where hostname is the name or IP address of the system that will run ufsdump to perform the backup. Then run the following command:
# ufsdump 0uf remote_hostname:/dev/rmt/0n /

Restoring the OS
1. For this example, your OS disk is totally corrupted and replaced with a new disk. Go to the ok prompt and boot in single-user mode from the Solaris CD.
ok boot cdrom -s
2. Partition your new disk in the same way as your original disk.
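If you still have a copy of the original disk's VTOC (or another disk with an identical layout, such as a mirror), the label can be copied with prtvtoc and fmthard; the device names below are only examples:
# prtvtoc /dev/rdsk/c0t1d0s2 | fmthard -s - /dev/rdsk/c0t0d0s2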
3. Format all slices using the newfs command. For example:
# newfs /dev/rdsk/c0t0d0s0
4. Make a new directory in /tmp:
# mkdir /tmp/slice0
5. Mount c0t0d0s0 into /tmp/slice0:
# mount /dev/dsk/c0t0d0s0 /tmp/slice0
6. Verify the status of the tape drive:
# mt status
If the tape drive is not detected, issue the following command:
# devfsadm -c tape
or:
# drvconfig
# tapes
# devlinks
Verify the status of the tape drive again and make sure the backup tape is at the first block (file number zero). Use the following command to rewind the backup tape:
# mt rewind
7. Go into the /tmp/slice0 directory and start restoring the OS:
# cd /tmp/slice0
# ufsrestore rvf /dev/rmt/0n
If you want to restore from another disk (such as c0t1d0s0), use the following command:
# mkdir /tmp/backup
# mount /dev/dsk/c0t1d0s0 /tmp/backup
# ufsdump 0f - /tmp/backup | (cd /tmp/slice0; ufsrestore xvf -)
8. After restoring all the partitions successfully, install the boot block to make the disk bootable. This example assumes your /usr is located inside the "/" partition:
# cd /tmp/slice0/usr/platform/`uname -m`/lib/fs/ufs
# installboot bootblk /dev/rdsk/c0t0d0s0
9. To finish restoring your OS, reboot the system.

Performance Forensics

Description: This article discusses how to address system-performance complaints with predictable and accurate results.

http://www.sun.com/solutions/blueprints/1203/817-4...

Beginners Guide to LDoms: Understanding and Deploying Logical Domains

Description: This guide is intended to assist the reader in gaining an understanding of how to easily and effectively deploy Sun's Logical Domains (LDoms) technology. It will help the reader determine how and where to use logical domains to the greatest effect using best practices. It discusses strategies for deploying logical domains on the Sun Fire T1000 and T2000 systems, the first systems to offer Logical Domain support, and the various best practices for these platforms. The guide works through step-by-step examples that include the commands to set up, deploy, and manage logical domains, and looks at commonly asked questions and advanced techniques.
http://www.sun.com/blueprints/0207/820-0832.pdf