
How do I collect information for a performance-related issue?

Last Updated: Aug 15, 2016 12:38PM EDT

When escalating performance issues, we often need metrics collected from your environment while the performance degradation is occurring. This data allows our support and engineering teams to pinpoint where a given problem stems from. In this article, we discuss the tools we primarily use for troubleshooting performance issues.

1) gstack

We will often use 'gstack' to take snapshots of what the threads under the ecelerity_send process are doing. This is especially helpful when determining the cause of general slowness, timeouts within Momentum, and other performance-related issues. To gather gstacks, first make sure that the gdb package is installed; otherwise, the command will return an error. To install the gdb package, run:

# yum install gdb

Once that's finished, you'll want to run the following command:

# for i in `seq 1 10`; do ECPID=$(cat /var/run/ecelerity.pid.instance); echo "Taking snapshot #${i} of PID ${ECPID}"; gstack $ECPID > ecgstack.`date +"01-01-2014"`.$i.txt ; sleep 2; done

To keep track of when these snapshots are retrieved, please change the literal "01-01-2014" placeholder to the current date. This command will dump 10 text files in your current working directory. Please collect this information only while the problem is occurring, then create a tarball of these files and send them to us in your support case, as shown below.
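
As a minimal sketch (the archive name here is only an example; use whatever naming your support engineer requests), the collected snapshots can be bundled like this:

# tar -czvf gstacks.$(hostname -s).$(date +%F).tar.gz ecgstack.*.txt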

2) iostat

When determining disk input/output statistics, we may ask that you utilize the "iostat" command. This is part of the "sysstat" package on RHEL-based distributions. To install iostat, please run:

# yum install sysstat

To capture useful I/O statistics, please run:

# iostat -dxtk 5 10 | tee iostat.txt

This command will take 10 snapshots at 5-second intervals and write the output to a file in your current working directory; a background-friendly variant is sketched after the flag list below. The flags for iostat do the following:

-d – Show the device report
-x – Show extended statistics, if available
-t – Print the time of each report, for reference when reviewing the captured output
-k (or -m) – Print the stats in KB/sec (or MB/sec) rather than blocks/sec
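
If the degradation is intermittent and you cannot keep a terminal open, a minimal sketch is to run the same capture in the background with nohup. The file name and the 120-report count (roughly ten minutes at 5-second intervals) are just examples; adjust them to the window you need:

# nohup iostat -dxtk 5 120 > iostat.$(date +%F).txt 2>&1 &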

3) Core dumps

Normally, Ecelerity will create trace files in /var/log/ecelerity/traces upon crashes and watchdog timeouts. However, there are occasions where deeper introspection is required than trace files can provide. Our engineers and support team may request that you set up core dumping on a problematic MTA. To get started, please create or modify /opt/msys/ecelerity/etc/environment with the following parameters:

EC_TRACE_ON_CRASH=off
export EC_TRACE_ON_CRASH


Change your sysctl settings to allow the kernel in your environment to produce core dumps:

# sysctl -w fs.suid_dumpable=1
# sysctl -w kernel.core_pattern=/var/tmp/core.%t.%e.%p


Please note that core dumps can be quite large, so be sure to point "kernel.core_pattern" at a partition with enough free disk space to hold a dump the size of the machine's physical RAM.
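
These sysctl settings do not survive a reboot on their own. As a minimal sketch, assuming a standard RHEL-style /etc/sysctl.conf, they can be persisted like this:

# echo 'fs.suid_dumpable = 1' >> /etc/sysctl.conf
# echo 'kernel.core_pattern = /var/tmp/core.%t.%e.%p' >> /etc/sysctl.conf
# sysctl -p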

Verify that both sysctl settings have been applied:

# sysctl fs.suid_dumpable kernel.core_pattern

Restart ecelerity with "ulimit -c" set to unlimited so that all of the addressable memory will be dumped into the core file:

# ulimit -c unlimited; /opt/msys/ecelerity/bin/ec_ctl restart
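
To confirm the new limit actually took effect on the running process, a minimal sketch (this reuses the pid file path from the gstack example above; adjust it for your instance) is to check /proc/<pid>/limits:

# ECPID=$(cat /var/run/ecelerity.pid.instance); grep "core file" /proc/${ECPID}/limits

The "Max core file size" column should read "unlimited".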

When a core dump is produced, please proceed with the following commands to produce a backtrace from it. In the same directory where the core file is produced, create a file called "core.gdb" with the following lines:

# Walk the in-memory circular log buffers, one queue per error level (0-5)
set $outer_ctr = 0
while $outer_ctr < 6
        printf "error level %d\n", $outer_ctr
        set $inner_ctr = 0
        print log_circ_queue[$outer_ctr]
        while $inner_ctr < 11
                print log_circ_queue[$outer_ctr].logs[$inner_ctr].buffer
                set $inner_ctr++
        end
        set $outer_ctr++
end
# Full backtrace (with local variables) for every thread, plus CPU registers
thread apply all bt full
info registers
quit


Once that's done, execute the following command, making sure to properly reference the filename of the core dump:

# gdb -x core.gdb /opt/msys/ecelerity/sbin/ecelerity_send /var/tmp/core.ecelerity_send.XXXXX 2>&1 | tee output.txt

Now you should have a file called "output.txt" in your current working directory. Please attach it to your support case so that we can escalate this information to our engineering team. Additionally, please keep the core dump available while your case is ongoing; while the backtrace created from gdb is very useful, there are occasions where we'll need the full core file to inspect further.
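
If we do ask for the full core file, it usually compresses well before upload. A minimal sketch (substitute the actual file name of your core dump for the placeholder):

# gzip -c /var/tmp/core.ecelerity_send.XXXXX > /var/tmp/core.ecelerity_send.XXXXX.gz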

4) Momentum-specific commands

There are numerous commands within Momentum's ec_console that are very useful for debugging purposes. Enter debugging mode within ec_console, then run the following commands. It's often handy to have a text editor open in another window to copy and paste the output from these commands (or capture the whole session as sketched after the list):
cache list all
memory
summary (executed at least 5 times)
events fd
events time (executed at least 5 times)
threads stats
threads io queue
threads cpu queue
mailq
show inbound
show outbound
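
Rather than copying and pasting by hand, one option is the standard "script" utility, which records everything printed to the terminal into a file (the output file name is just an example):

# script ec_console_session.$(date +%F).txt

Start ec_console as usual inside the recorded session, run the commands above, leave ec_console, and then type "exit" to stop recording. Attach the resulting text file to your support case.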
 