sudo is fast again

A big hammer solution:

[root@quaco ~]# rpm -e fprintd fprintd-pam
[error] [/etc/nsswitch.conf] is not a symbolic link!
[error] [/etc/nsswitch.conf] was not created by authselect!
[error] Unexpected changes to the configuration were detected.
[error] Refusing to activate profile unless those changes are removed or overwrite is requested.
Unable to disable feature [17]: File exists
[root@quaco ~]#

The warnings are not that reassuring, and trying to use authselect to check the config doesn’t bring extra confidence either:

[root@quaco ~]# authselect check
[error] [/etc/nsswitch.conf] is not a symbolic link!
[error] [/etc/nsswitch.conf] was not created by authselect!
Current configuration is not valid. It was probably modified outside authselect.

The fprintd module is still referenced in the PAM config files:

[root@quaco ~]# grep fprintd /etc/pam.d/system-auth
auth sufficient pam_fprintd.so
[root@quaco ~]#

But since it is no longer installed, I get my fast sudo back. Back to work.
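For the record, a less drastic route probably exists; here is a sketch, untested on this box since, as the errors above show, its config isn’t authselect-managed:

# On an authselect-managed system, dropping fingerprint auth should be just:
authselect disable-feature with-fingerprint

# Otherwise, commenting out the pam_fprintd.so line directly should be enough:
sed -i 's/^auth.*pam_fprintd\.so/#&/' /etc/pam.d/system-auth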

What is ‘sudo su -‘ doing?

Out of the blue sudo started taking a long time to ask for my password, so I sleeptyped:

$ strace sudo su -

sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?
$

Oops, perhaps it would be a good time for me to try using ‘perf trace’, so I tried:

perf trace --duration 5000 --call-graph=dwarf

That does system-wide syscall tracing, looking for syscalls taking more than 5 seconds to complete, together with DWARF callchains.

And after tweaking that --duration parameter, using --filter-pids to exclude some long-timeout processes that seemed unrelated, and even without resorting to ‘-e \!futex’ to exclude futex calls that also take that long to complete but again looked unrelated to sudo being stuck, I got the clue I needed from this entry:


12345.846 (25024.785 ms): sudo/3571 poll(ufds: 0x7ffdcc4376a0, nfds: 1, timeout_msecs: 25000) = 0 (Timeout)
__GI___poll (inlined)
[0x30dec] (/usr/lib64/libdbus-1.so.3.19.11)
[0x2fab0] (/usr/lib64/libdbus-1.so.3.19.11)
[0x176cb] (/usr/lib64/libdbus-1.so.3.19.11)
[0x1809f] (/usr/lib64/libdbus-1.so.3.19.11)
[0x1518b] (/usr/lib64/libdbus-glib-1.so.2.3.4)
dbus_g_proxy_call (/usr/lib64/libdbus-glib-1.so.2.3.4)
pam_sm_authenticate (/usr/lib64/security/pam_fprintd.so)
[0x41f1] (/usr/lib64/libpam.so.0.84.2)
pam_authenticate (/usr/lib64/libpam.so.0.84.2)
[0xb703] (/usr/libexec/sudo/sudoers.so)
[0xa8f4] (/usr/libexec/sudo/sudoers.so)
[0xc754] (/usr/libexec/sudo/sudoers.so)
[0x24a83] (/usr/libexec/sudo/sudoers.so)
[0x1d759] (/usr/libexec/sudo/sudoers.so)
[0x6ef3] (/usr/bin/sudo)
__libc_start_main (/usr/lib64/libc-2.29.so)
[0x887d] (/usr/bin/sudo)

So it’s about PAM authenticating via some fprintd module, and sudo polling with a 25000 msec timeout; no wonder that when I first tried --failure, to ask just for the syscalls that returned an error, I wasn’t getting anything…
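For reference, the final invocation was something along these lines; the duration value and the PIDs are placeholders for what I actually ended up filtering, and the futex exclusion turned out not to be needed:

# perf trace --duration 20000 --call-graph=dwarf --filter-pids 1234,5678 -e '!futex'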

Let’s see what this thing is:

[root@quaco ~]# rpm -qf /usr/lib64/security/pam_fprintd.so
fprintd-pam-0.9.0-1.fc30.x86_64
[root@quaco ~]# rpm -q --qf "%{description}\n" fprintd-pam
PAM module that uses the fprintd D-Bus service for fingerprint
authentication.
[root@quaco ~]

I don’t recall enabling this, and from a quick look this t480s doesn’t seem to have any fingerprint reader; let’s see how to disable this on this Fedora 30 system…

Pahole in the news

Found another interesting article, this time mentioning a tool I wrote long ago and that, at least for kernel object files, has been working for a long time without much care on my part: pahole. Go read a bit about it in Will Cohen’s “How to avoid wasting megabytes of memory a few bytes at a time” article.
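If you haven’t played with it yet, here is a quick taste, assuming an object file built with debug info; the struct name is just an illustration:

$ pahole -C task_struct vmlinux     # layout of one struct, with holes and padding annotated
$ pahole --packable vmlinux | head  # structs whose holes could be removed by reordering members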

Guess I should try running a companion script that tries to process all .o files in debuginfo packages to see how bad it is for non-kernel files, with all the DWARF changes over these years…

Article about ‘perf annotate’

Just found out about Ravi’s article about ‘perf annotate’; it is concise yet covers most features, including cross-annotation. Go read it!

Looking for users of new syscalls

Recently Linux got a new syscall to get extended information about files, a super ‘stat’, if you will; read more about it at LWN.

So I copied the headers with the definitions for the statx arguments to tools/include/ so that ‘perf trace’ can use them to beautify the arguments, i.e. to show them as a bitmap of strings, as described in this cset.

To test it I used one of the things ‘perf trace’ can do that ‘strace’ does not: system-wide stracing. To see if any of the programs running on my machine were using the new syscall, I simply did, using strace-like syntax:

# perf trace -e statx

After a few minutes, nothing… So this Fedora 25 system isn’t using it in any of the utilities I happened to run in that window; not surprising, as glibc still needs statx to be wired up.

So I found out about samples/statx/test-statx.c, and after installing the kernel headers and pointing the compiler to where those files were installed, I restarted that system-wide ‘perf trace’ session and ran the test program. Much better:

# trace -e statx
16612.967 ( 0.028 ms): statx/562 statx(dfd: CWD, filename: /etc/passwd, flags: SYMLINK_NOFOLLOW, mask: TYPE|MODE|NLINK|UID|GID|ATIME|MTIME|CTIME|INO|SIZE|BLOCKS|BTIME, buffer: 0x7ffef195d660) = 0
33064.447 ( 0.011 ms): statx/569 statx(dfd: CWD, filename: /tmp/statx, flags: SYMLINK_NOFOLLOW|STATX_FORCE_SYNC, mask: TYPE|MODE|NLINK|UID|GID|ATIME|MTIME|CTIME|INO|SIZE|BLOCKS|BTIME, buffer: 0x7ffc5484c790) = 0
36050.891 ( 0.023 ms): statx/576 statx(dfd: CWD, filename: /etc/motd, flags: SYMLINK_NOFOLLOW, mask: BTIME, buffer: 0x7ffeb18b66e0) = 0
38039.889 ( 0.023 ms): statx/584 statx(dfd: CWD, filename: /home/acme/.bashrc, flags: SYMLINK_NOFOLLOW, mask: TYPE|MODE|NLINK|UID|GID|ATIME|MTIME|CTIME|INO|SIZE|BLOCKS|BTIME, buffer: 0x7fff1db0ea90) = 0
^C#
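For reference, building the test program from the kernel source tree went roughly like this; the paths are from memory, so details may differ:

$ make headers_install          # installs the uapi headers under ./usr/include by default
$ gcc -I usr/include samples/statx/test-statx.c -o /tmp/test-statx
$ /tmp/test-statx /etc/passwd   # statx() the given path, producing events like the ones above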

Ah, to get the filenames fetched we need to put in place a special probe that collects filenames passed to the kernel right after the kernel copies them from user memory:

[root@jouet ~]# perf probe 'vfs_getname=getname_flags:72 pathname=result->name:string'
Added new event:
probe:vfs_getname    (on getname_flags:72 with pathname=result->name:string)

You can now use it in all perf tools, such as:

perf record -e probe:vfs_getname -aR sleep 1

[root@jouet ~]# trace -e open touch /etc/passwd
0.024 ( 0.011 ms): touch/649 open(filename: /etc/ld.so.cache, flags: CLOEXEC) = 3
0.056 ( 0.018 ms): touch/649 open(filename: /lib64/libc.so.6, flags: CLOEXEC) = 3
0.481 ( 0.014 ms): touch/649 open(filename: /usr/lib/locale/locale-archive, flags: CLOEXEC) = 3
0.553 ( 0.012 ms): touch/649 open(filename: /etc/passwd, flags: CREAT|NOCTTY|NONBLOCK|WRONLY, mode: IRUGO|IWUGO) = 3
[root@jouet ~]#

Make sure you have CONFIG_DEBUG_INFO set in your kernel build or that the matching debuginfo packages are installed. The probe needs to be put in place just once per boot; ‘perf trace’ will find it there and use it.
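To check whether the probe is already in place, or to remove it when no longer needed, ‘perf probe’ handles that as well:

# perf probe -l                     # list the currently defined probe events
# perf probe -d probe:vfs_getname   # delete this one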

Lastly, if ‘perf’ is hardlinked to ‘trace’, then the latter works the same as ‘perf trace’.

Getting backtraces from arbitrary places

This needs debuginfo, either from a -debuginfo rpm (or equivalent) or from building with ‘cc -g’:

[root@jouet ~]# perf probe -L icmp_rcv:52 | head -15

  52  	if (rt->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST)) {
      		/*
      		 * RFC 1122: 3.2.2.6 An ICMP_ECHO to broadcast MAY be
      		 *  silently ignored (we let user decide with a sysctl).
      		 * RFC 1122: 3.2.2.8 An ICMP_TIMESTAMP MAY be silently
      		 *  discarded if to broadcast/multicast.
      		 */
  59  		if ((icmph->type == ICMP_ECHO ||
  60  		     icmph->type == ICMP_TIMESTAMP) &&
      		    net->ipv4.sysctl_icmp_echo_ignore_broadcasts) {
      			goto error;
      		}
      		if (icmph->type != ICMP_ECHO &&
      		    icmph->type != ICMP_TIMESTAMP &&
[root@jouet ~]# perf probe icmp_rcv:59
Added new event:
  probe:icmp_rcv       (on icmp_rcv:59)

You can now use it in all perf tools, such as:

	perf record -e probe:icmp_rcv -aR sleep 1

[root@jouet ~]# perf trace --no-syscalls --event probe:icmp_rcv/max-stack=5/
     0.000 probe:icmp_rcv:(ffffffffb47b7f9b))
                          icmp_rcv ([kernel.kallsyms])
                          ip_local_deliver_finish ([kernel.kallsyms])
                          ip_local_deliver ([kernel.kallsyms])
                          ip_rcv_finish ([kernel.kallsyms])
                          ip_rcv ([kernel.kallsyms])
  1025.876 probe:icmp_rcv:(ffffffffb47b7f9b))
                          icmp_rcv ([kernel.kallsyms])
                          ip_local_deliver_finish ([kernel.kallsyms])
                          ip_local_deliver ([kernel.kallsyms])
                          ip_rcv_finish ([kernel.kallsyms])
                          ip_rcv ([kernel.kallsyms])
^C[root@jouet ~]#

Humm, lots of redundant info; guess we could do away with those ([kernel.kallsyms]) suffixes in all the callchain lines…
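For completeness: anything that makes the machine receive ICMP fires the probe; a plain ping from another box is enough, and is presumably what produced the two hits above, roughly one second apart:

$ ping -c 2 jouet    # run from some other machine; each echo request goes through icmp_rcv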

Using --pid and --tid to record samples in specific threads

To test a patch I wrote to fix a problem reported on lkml, I ended up writing a changeset comment that I think is worth sharing here, as it demonstrates how to use the --pid and --tid command line options available in perf record and perf top. So here it goes.

Jeff Moyer reported these messages:

      Warning:  ... trying to fall back to cpu-clock-ticks
    
    couldn't open /proc/-1/status
    couldn't open /proc/-1/maps
    [ls output]
    [ perf record: Woken up 1 times to write data ]
    [ perf record: Captured and wrote 0.008 MB perf.data (~363 samples) ]

That led me and David Ahern to see that something was fishy in the thread synthesizing routines, at least for the case where the workload is started from perf record, as -1 is the default for target_tid (the perf record --tid parameter), so somehow we were trying to synthesize the PERF_RECORD_MMAP and PERF_RECORD_COMM events for thread -1: a bug.

So I investigated this and noticed that when we introduced support for recording a process and its threads using --pid, some bugs were introduced, and that the way to fix it was, instead of passing the target_tid to the event synthesizing routines, to pass the thread_map, which has the list of threads for a --pid or just the single thread for a --tid.

Checked in the following ways:

On an 8-way machine, run cyclictest:

    [root@emilia ~]# perf record cyclictest -a -t -n -p99 -i100 -d50
    policy: fifo: loadavg: 0.00 0.13 0.31 2/139 28798
    
    T: 0 (28791) P:99 I:100 C:  25072 Min:      4 Act:    5 Avg:    6 Max:     122
    T: 1 (28792) P:98 I:150 C:  16715 Min:      4 Act:    6 Avg:    5 Max:      27
    T: 2 (28793) P:97 I:200 C:  12534 Min:      4 Act:    5 Avg:    4 Max:       8
    T: 3 (28794) P:96 I:250 C:  10028 Min:      4 Act:    5 Avg:    5 Max:      96
    T: 4 (28795) P:95 I:300 C:   8357 Min:      5 Act:    6 Avg:    5 Max:      12
    T: 5 (28796) P:94 I:350 C:   7163 Min:      5 Act:    6 Avg:    5 Max:      12
    T: 6 (28797) P:93 I:400 C:   6267 Min:      4 Act:    5 Avg:    5 Max:       9
    T: 7 (28798) P:92 I:450 C:   5571 Min:      4 Act:    5 Avg:    5 Max:       9
    ^C[ perf record: Woken up 1 times to write data ]
    [ perf record: Captured and wrote 0.108 MB perf.data (~4719 samples) ]
    
    [root@emilia ~]#

This will create one extra thread per CPU:

  [root@emilia ~]# tuna -t cyclictest -CP
                          thread       ctxt_switches
        pid SCHED_ rtpri affinity voluntary nonvoluntary             cmd
     28825   OTHER     0     0xff      2169          671      cyclictest
      28832   FIFO    93        6     52338            1      cyclictest
      28833   FIFO    92        7     46524            1      cyclictest
      28826   FIFO    99        0    209360            1      cyclictest
      28827   FIFO    98        1    139577            1      cyclictest
      28828   FIFO    97        2    104686            0      cyclictest
      28829   FIFO    96        3     83751            1      cyclictest
      28830   FIFO    95        4     69794            1      cyclictest
      28831   FIFO    94        5     59825            1      cyclictest
    [root@emilia ~]#

So we should expect only samples for the above 9 threads when using the --dump-raw-trace|-D perf report switch to look at the column with the tid:

    [root@emilia ~]# perf report -D | grep RECORD_SAMPLE | cut -d/ -f2 | cut -d: -f1 | sort | uniq -c
        629 28825
        110 28826
        491 28827
        308 28828
        198 28829
        621 28830
        225 28831
        203 28832
         89 28833
    [root@emilia ~]#
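In case that pipeline looks cryptic, it just counts samples per tid; spelled out, with my comments:

    perf report -D |         # dump the raw events, one PERF_RECORD_* per line
      grep RECORD_SAMPLE |   # keep only the samples
      cut -d/ -f2 |          # each sample carries a pid/tid pair; keep what follows the '/'
      cut -d: -f1 |          # ...and drop everything after the tid
      sort | uniq -c         # count samples per tid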

So for workloads started by perf record it seems to work. Now for existing workloads: I just ran cyclictest first, without perf record:

    [root@emilia ~]# tuna -t cyclictest -CP
                          thread       ctxt_switches
        pid SCHED_ rtpri affinity voluntary nonvoluntary             cmd
     28859   OTHER     0     0xff       594          200      cyclictest
      28864   FIFO    95        4     16587            1      cyclictest
      28865   FIFO    94        5     14219            1      cyclictest
      28866   FIFO    93        6     12443            0      cyclictest
      28867   FIFO    92        7     11062            1      cyclictest
      28860   FIFO    99        0     49779            1      cyclictest
      28861   FIFO    98        1     33190            1      cyclictest
      28862   FIFO    97        2     24895            1      cyclictest
      28863   FIFO    96        3     19918            1      cyclictest
    [root@emilia ~]#

and then later did:

    [root@emilia ~]# perf record --pid 28859 sleep 3
    [ perf record: Woken up 1 times to write data ]
    [ perf record: Captured and wrote 0.027 MB perf.data (~1195 samples) ]

    [root@emilia ~]#

That collects 3 seconds’ worth of samples for pid 28859 and its children; counting samples per tid again:

    [root@emilia ~]# perf report -D | grep RECORD_SAMPLE | cut -d/ -f2 | cut -d: -f1 | sort | uniq -c
         15 28859
         33 28860
         19 28861
         13 28862
         13 28863
         10 28864
         11 28865
          9 28866
        255 28867
    [root@emilia ~]#

Works. The last thing is to check whether looking at just one of those threads also works:

    [root@emilia ~]# perf record --tid 28866 sleep 3
    [ perf record: Woken up 1 times to write data ]
    [ perf record: Captured and wrote 0.006 MB perf.data (~242 samples) ]
    [root@emilia ~]# perf report -D | grep RECORD_SAMPLE | cut -d/ -f2 | cut -d: -f1 | sort | uniq -c
          3 28866
    [root@emilia ~]#

Works too.
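The same targeting works for live profiling too, which is handy for eyeballing just one of those threads without recording anything; for instance, with the pids/tids above:

    perf top --pid 28859    # cyclictest and all of its threads
    perf top --tid 28866    # just that one of them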

Friggin mystery…

I was working on adding a thin Python binding for the evsel and evlist abstractions I introduced in the perf tools when I stumbled on this, what peterz calls “friggin mystery mmap events”…

Let’s start with the simple Python script that started this:

import perf

def main():
	cpus = perf.cpu_map()
	threads = perf.thread_map()
	evsel = perf.evsel(task = 1, comm = 1,
			   wakeup_events = 1, sample_period = 1,
			   sample_id_all = 1,
			   sample_type = perf.SAMPLE_PERIOD | perf.SAMPLE_CPU | perf.SAMPLE_TID)
	evsel.open(cpus = cpus, threads = threads);
	evlist = perf.evlist()
	evlist.add(evsel)
	evlist.mmap(cpus = cpus, threads = threads)
	while True:
		evlist.poll(timeout = -1)
		for cpu in cpus:
			event = evlist.read_on_cpu(cpu)
			if not event:
				continue
			print "cpu: %2d, pid: %4d, tid: %4d" % (event.sample_cpu,
								event.sample_pid,
								event.sample_tid),
			print event

if __name__ == '__main__':
    main()

So what does this script do? It asks the kernel perf infrastructure to generate events every time a thread is created and when threads exit.

Running it:

[root@emilia linux]# tools/perf/python/twatch.py
cpu:  0, pid:    0, tid:    0 { type: mmap, pid: 0, tid: 0, start: 0, length: 0x694ae632, offset: 0, filename:  }
cpu:  3, pid:    0, tid:    0 { type: mmap, pid: 0, tid: 0, start: 0x3, length: 0xc804d5ca, offset: 0, filename:  }
cpu:  6, pid:    0, tid:    0 { type: mmap, pid: 0, tid: 0, start: 0x6, length: 0xbba638e9, offset: 0, filename:  }
cpu:  1, pid: 1893, tid: 1893 { type: mmap, pid: 1893, tid: 1893, start: 0x1, length: 0x7899c915, offset: 0, filename:  }
cpu:  2, pid:   43, tid:   43 { type: mmap, pid: 43, tid: 43, start: 0x2, length: 0xca5a622e, offset: 0, filename:  }
cpu:  5, pid: 1523, tid: 1525 { type: mmap, pid: 1523, tid: 1525, start: 0x5, length: 0x9a4cd05f, offset: 0, filename:  }
cpu:  4, pid:    0, tid:    0 { type: mmap, pid: 0, tid: 0, start: 0x4, length: 0x7a4d4989, offset: 0, filename:  }

cpu:  2, pid: 1543, tid: 1543 { type: fork, pid: 1970, ppid: 1543, tid: 1970, ptid: 1543, time: 515738951270}
cpu:  6, pid: 1970, tid: 1970 { type: comm, pid: 1970, tid: 1970, comm: sshd }
cpu:  2, pid:   60, tid:   60 { type: fork, pid: 1971, ppid: 60, tid: 1971, ptid: 60, time: 515743769565}
cpu:  3, pid: 1971, tid: 1971 { type: fork, pid: 1972, ppid: 1971, tid: 1972, ptid: 1971, time: 515743810636}
cpu:  7, pid: 1972, tid: 1972 { type: comm, pid: 1972, tid: 1972, comm: modprobe }
cpu:  7, pid: 1972, tid: 1972 { type: exit, pid: 1972, ppid: 1972, tid: 1972, ptid: 1972, time: 515745080846}
cpu:  1, pid: 1971, tid: 1971 { type: exit, pid: 1971, ppid: 1971, tid: 1971, ptid: 1971, time: 515746027272}
cpu:  7, pid: 1972, tid: 1972 { type: exit, pid: 1972, ppid: 1972, tid: 1972, ptid: 1972, time: 515745086164}
cpu:  1, pid: 1971, tid: 1971 { type: exit, pid: 1971, ppid: 1971, tid: 1971, ptid: 1971, time: 515746034982}
cpu:  7, pid: 1970, tid: 1970 { type: fork, pid: 1973, ppid: 1970, tid: 1973, ptid: 1970, time: 515747725784}
cpu:  0, pid: 1278, tid: 1281 { type: fork, pid: 1278, ppid: 1278, tid: 1974, ptid: 1281, time: 515803257141}
cpu:  3, pid: 1278, tid: 1974 { type: comm, pid: 1278, tid: 1974, comm: rs:main Q:Reg }
cpu:  2, pid: 1973, tid: 1973 { type: exit, pid: 1973, ppid: 1973, tid: 1973, ptid: 1973, time: 515803913973}
cpu:  0, pid: 1970, tid: 1970 { type: fork, pid: 1975, ppid: 1970, tid: 1975, ptid: 1970, time: 515804961019}
cpu:  2, pid: 1973, tid: 1973 { type: exit, pid: 1973, ppid: 1973, tid: 1973, ptid: 1973, time: 515803917806}
cpu:  3, pid: 1975, tid: 1975 { type: fork, pid: 1976, ppid: 1975, tid: 1976, ptid: 1975, time: 515808031062}
cpu:  7, pid: 1976, tid: 1976 { type: comm, pid: 1976, tid: 1976, comm: bash }

Everything seems nice except for those first 8 mmap events, one per CPU, that I didn’t ask for in the evsel (event selector) constructor, here:

	evsel = perf.evsel(task = 1, comm = 1,
			   wakeup_events = 1, sample_period = 1,
			   sample_id_all = 1,
			   sample_type = perf.SAMPLE_PERIOD | perf.SAMPLE_CPU | perf.SAMPLE_TID)

See ‘task = 1’? That asks for fork and exit events to be generated in the ring buffer; ‘comm = 1’ asks for COMM events (the short, 16 bytes max, process name); but I didn’t specify ‘mmap = 1’, which would ask for events to be generated when executable mmaps take place…

As I know that those events are stashed in the ring buffer by the perf_event_mmap_output routine, I thought about using ‘perf probe’ to figure out what was causing those events, so I did:

[root@emilia linux]# perf probe mmap_out=perf_event_mmap_output
Add new event:
  probe:mmap_out       (on perf_event_mmap_output)

You can now use it on all perf tools, such as:

	perf record -e probe:mmap_out -aR sleep 1

[root@emilia linux]#

That created a new event, one that will fire every time perf_event_mmap_output is called. To reduce typing I added it aliased as ‘mmap_out’.

Now let’s use it, asking for callchains to be taken every time this event happens:

[root@emilia linux]# perf record -e probe:mmap_out -g tools/perf/python/twatch.py 
cpu:  7, pid:    0, tid:    0 { type: mmap, pid: 0, tid: 0, start: 0x7, length: 0xc6b8162f, offset: 0, filename:  }
cpu:  2, pid:    0, tid:    0 { type: mmap, pid: 0, tid: 0, start: 0x2, length: 0xdf243434, offset: 0, filename:  }
cpu:  3, pid:    0, tid:    0 { type: mmap, pid: 0, tid: 0, start: 0x3, length: 0xf9f2d7ab, offset: 0, filename:  }
cpu:  5, pid: 1893, tid: 1893 { type: mmap, pid: 1893, tid: 1893, start: 0x5, length: 0xa00963c1, offset: 0, filename:  }
cpu:  1, pid:    0, tid:    0 { type: mmap, pid: 0, tid: 0, start: 0x1, length: 0x95744474, offset: 0, filename:  }
cpu:  0, pid: 1893, tid: 1893 { type: mmap, pid: 1893, tid: 1893, start: 0, length: 0xa0655031, offset: 0, filename:  }
cpu:  4, pid:    0, tid:    0 { type: mmap, pid: 0, tid: 0, start: 0x4, length: 0xb2b21883, offset: 0, filename:  }
^CTraceback (most recent call last):
  File "tools/perf/python/twatch.py", line 41, in 
    main()
[ perf record: Woken up 1 times to write data ]
  File "tools/perf/python/twatch.py", line 30, in main
    evlist.poll(timeout = -1)
[ perf record: Captured and wrote 0.012 MB perf.data (~545 samples) ]
KeyboardInterrupt
[root@emilia linux]#

And there they are; let’s see what caused them:

[root@emilia linux]# perf report --stdio
# Events: 30 
#
# Overhead    Command      Shared Object                  Symbol
# ........  .........  .................  ......................
#
   100.00%  twatch.py  [kernel.kallsyms]  [k] perf_event_mmap_output
            |
            --- perf_event_mmap_output
                perf_event_mmap
               |          
               |--96.67%-- mmap_region
               |          do_mmap_pgoff
               |          |          
               |          |--93.10%-- sys_mmap_pgoff
               |          |          sys_mmap
               |          |          system_call
               |          |          __mmap
               |          |          |          
               |          |           --100.00%-- _dl_map_object
               |          |          
               |           --6.90%-- elf_map
               |                     load_elf_binary
               |                     search_binary_handler
               |                     load_script
               |                     search_binary_handler
               |                     do_execve
               |                     sys_execve
               |                     stub_execve
               |                     0x3d068a6dd7
               |          
                --3.33%-- install_special_mapping
                          arch_setup_additional_pages
                          load_elf_binary
                          search_binary_handler
                          load_script
                          search_binary_handler
                          do_execve
                          sys_execve
                          stub_execve
                          0x3d068a6dd7

#
# (For a higher level overview, try: perf report --sort comm,dso)
#
[root@emilia linux]# 

Ok, loading ELF files, the python interpreter…

The investigation is still underway, but using the perf tools in this way sounds so cool that I thought about sharing it here 🙂

Cross platform perf.data analysis support

There are still some problems related to loading vmlinux files, but those are unrelated to the feature implemented in this patch, so they will get fixed in the next patches. Here are some results:

1. collect perf.data file on a Fedora 12 machine, x86_64, 64-bit userland

2. transfer it to a Debian Testing machine, PARISC64, 32-bit userland

acme@parisc:~/git/linux-2.6-tip$ perf buildid-list | head -5
74f9930ee94475b6b3238caf3725a50d59cb994b [kernel.kallsyms]
55fdd56670453ea66c011158c4b9d30179c1d049 /lib/modules/2.6.33-rc4-tip+/kernel/net/ipv4/netfilter/ipt_MASQUERADE.ko
41adff63c730890480980d5d8ba513f1c216a858 /lib/modules/2.6.33-rc4-tip+/kernel/net/ipv4/netfilter/iptable_nat.ko
90a33def1077bb8e97b8a78546dc96c2de62df46 /lib/modules/2.6.33-rc4-tip+/kernel/net/ipv4/netfilter/nf_nat.ko
984c7bea90ce1376d5c8e7ef43a781801286e62d /lib/modules/2.6.33-rc4-tip+/kernel/drivers/net/tun.ko

acme@parisc:~/git/linux-2.6-tip$ perf buildid-list | tail -5
22492f3753c6a67de5c7ccbd6b863390c92c0723 /usr/lib64/libXt.so.6.0.0
353802bb7e1b895ba43507cc678f951e778e4c6f /usr/lib64/libMagickCore.so.2.0.0
d10c2897558595efe7be8b0584cf7e6398bc776c /usr/lib64/libfprint.so.0.0.0
a83ecfb519a788774a84d5ddde633c9ba56c03ab /home/acme/bin/perf
d3ca765a8ecf257d263801d7ad8c49c189082317 /usr/lib64/libdwarf.so.0.0
acme@parisc:~/git/linux-2.6-tip$

acme@parisc:~/git/linux-2.6-tip$ perf report --sort comm
The file [kernel.kallsyms] cannot be used, trying to use /proc/kallsyms…

^^^^ The problem related to vmlinux handling: it shouldn’t be trying this
^^^^ rather alien /proc/kallsyms at all…

/lib64/libpthread-2.10.2.so with build id 5c68f7afeb33309c78037e374b0deee84dd441f6 not found, continuing without symbols
/lib64/libc-2.10.2.so with build id eb4ec8fa8b2a5eb18cad173c92f27ed8887ed1c1 not found, continuing without symbols
/home/acme/bin/perf with build id a83ecfb519a788774a84d5ddde633c9ba56c03ab not found, continuing without symbols
/usr/sbin/openvpn with build id f2037a091ef36b591187a858d75e203690ea9409 not found, continuing without symbols
Failed to open /lib/modules/2.6.33-rc4-tip+/kernel/drivers/net/e1000e/e1000e.ko, continuing without symbols
Failed to open /lib/modules/2.6.33-rc4-tip+/kernel/drivers/net/wireless/iwlwifi/iwlcore.ko, continuing without symbols
<SNIP more complaints about not finding the right build-ids,
those will have to wait for ‘perf archive’ or plain
copying what was collected by ‘perf record’ on the x86_64,
source machine, see further below for an example of this>

# Samples: 293085637
#
# Overhead          Command
# ........  ...............
#
61.70%             find
23.50%             perf
5.86%          swapper
3.12%             sshd
2.39%             init
0.87%             bash
0.86%            sleep
0.59%      dbus-daemon
0.25%             hald
0.24%   NetworkManager
0.19%  hald-addon-rfki
0.15%          openvpn
0.07%             phy0
0.07%         events/0
0.05%          iwl3945
0.05%         events/1
0.03%      kondemand/0
acme@parisc:~/git/linux-2.6-tip$

Which matches what we get when running the same command for the same perf.data
file on the F12, x86_64, source machine:

[root@doppio linux-2.6-tip]# perf report --sort comm
# Samples: 293085637
#
# Overhead          Command
# ........  ...............
#
61.70%             find
23.50%             perf
5.86%          swapper
3.12%             sshd
2.39%             init
0.87%             bash
0.86%            sleep
0.59%      dbus-daemon
0.25%             hald
0.24%   NetworkManager
0.19%  hald-addon-rfki
0.15%          openvpn
0.07%             phy0
0.07%         events/0
0.05%          iwl3945
0.05%         events/1
0.03%      kondemand/0
[root@doppio linux-2.6-tip]#

The other modes work as well, modulo the problem with vmlinux:

acme@parisc:~/git/linux-2.6-tip$ perf report --sort comm,dso 2> /dev/null | head -15
# Samples: 293085637
#
# Overhead          Command                      Shared Object
# ........  ...............  .................................
#
35.11%             find                   ffffffff81002b5a
18.25%             perf                   ffffffff8102235f
16.17%             find  libc-2.10.2.so
9.07%             find  find
5.80%          swapper                   ffffffff8102235f
3.95%             perf  libc-2.10.2.so
2.33%             init                   ffffffff810091b9
1.65%             sshd  libcrypto.so.0.9.8k
1.35%             find  [e1000e]
0.68%            sleep  libc-2.10.2.so
acme@parisc:~/git/linux-2.6-tip$

And modulo the lack of the right build-ids:

acme@parisc:~/git/linux-2.6-tip$ perf report --sort comm,dso,symbol 2> /dev/null | head -15
# Samples: 293085637
#
# Overhead          Command                      Shared Object  Symbol
# ........  ...............  .................................  ......
#
35.11%             find                   ffffffff81002b5a  [k] 0xffffffff81002b5a
18.25%             perf                   ffffffff8102235f  [k] 0xffffffff8102235f
16.17%             find  libc-2.10.2.so                     [.] 0x00000000045782
9.07%             find  find                               [.] 0x0000000000fb0e
5.80%          swapper                   ffffffff8102235f  [k] 0xffffffff8102235f
3.95%             perf  libc-2.10.2.so                     [.] 0x0000000007f398
2.33%             init                   ffffffff810091b9  [k] 0xffffffff810091b9
1.65%             sshd  libcrypto.so.0.9.8k                [.] 0x00000000105440
1.35%             find  [e1000e]                           [k] 0x00000000010948
0.68%            sleep  libc-2.10.2.so                     [.] 0x0000000011ad5b
acme@parisc:~/git/linux-2.6-tip$

But if we:

acme@parisc:~/git/linux-2.6-tip$ ls ~/.debug
ls: cannot access /home/acme/.debug: No such file or directory
acme@parisc:~/git/linux-2.6-tip$ mkdir -p ~/.debug/lib64/libc-2.10.2.so/
acme@parisc:~/git/linux-2.6-tip$ scp doppio:.debug/lib64/libc-2.10.2.so/* ~/.debug/lib64/libc-2.10.2.so/
acme@doppio’s password:
eb4ec8fa8b2a5eb18cad173c92f27ed8887ed1c1                   100% 1783KB 714.7KB/s   00:02
acme@parisc:~/git/linux-2.6-tip$ mkdir -p ~/.debug/.build-id/eb
acme@parisc:~/git/linux-2.6-tip$ ln -s ../../lib64/libc-2.10.2.so/eb4ec8fa8b2a5eb18cad173c92f27ed8887ed1c1 ~/.debug/.build-id/eb/4ec8fa8b2a5eb18cad173c92f27ed8887ed1c1
acme@parisc:~/git/linux-2.6-tip$ perf report --dsos libc-2.10.2.so 2> /dev/null

# dso: libc-2.10.2.so
# Samples: 64281170
#
# Overhead          Command  Symbol
# ........  ...............  ......
#
14.98%             perf  [.] __GI_strcmp
12.30%             find  [.] __GI_memmove
9.25%             find  [.] _int_malloc
7.60%             find  [.] _IO_vfprintf_internal
6.10%             find  [.] _IO_new_file_xsputn
6.02%             find  [.] __GI_close
3.08%             find  [.] _IO_file_overflow_internal
3.08%             find  [.] malloc_consolidate
3.08%             find  [.] _int_free
3.08%             find  [.] __strchrnul
3.08%             find  [.] __getdents64
3.08%             find  [.] __write_nocancel
3.08%            sleep  [.] __GI__dl_addr
3.08%             sshd  [.] __libc_select
3.08%             find  [.] _IO_new_file_write
3.07%             find  [.] _IO_new_do_write
3.06%             find  [.] __GI___errno_location
3.05%             find  [.] __GI___libc_malloc
3.04%             perf  [.] __GI_memcpy
1.71%             find  [.] __fprintf_chk
1.29%             bash  [.] __gconv_transform_utf8_internal
0.79%      dbus-daemon  [.] __GI_strlen
#
# (For a higher level overview, try: perf report --sort comm,dso)
#
acme@parisc:~/git/linux-2.6-tip$

Which matches what we get on the source, F12, x86_64 machine:

[root@doppio linux-2.6-tip]# perf report --dsos libc-2.10.2.so
# dso: libc-2.10.2.so
# Samples: 64281170
#
# Overhead          Command  Symbol
# ........  ...............  ......
#

14.98%             perf  [.] __GI_strcmp
12.30%             find  [.] __GI_memmove
9.25%             find  [.] _int_malloc
7.60%             find  [.] _IO_vfprintf_internal
6.10%             find  [.] _IO_new_file_xsputn
6.02%             find  [.] __GI_close
3.08%             find  [.] _IO_file_overflow_internal
3.08%             find  [.] malloc_consolidate
3.08%             find  [.] _int_free
3.08%             find  [.] __strchrnul
3.08%             find  [.] __getdents64
3.08%             find  [.] __write_nocancel
3.08%            sleep  [.] __GI__dl_addr
3.08%             sshd  [.] __libc_select
3.08%             find  [.] _IO_new_file_write
3.07%             find  [.] _IO_new_do_write
3.06%             find  [.] __GI___errno_location
3.05%             find  [.] __GI___libc_malloc
3.04%             perf  [.] __GI_memcpy
1.71%             find  [.] __fprintf_chk
1.29%             bash  [.] __gconv_transform_utf8_internal
0.79%      dbus-daemon  [.] __GI_strlen
#
# (For a higher level overview, try: perf report --sort comm,dso)
#
[root@doppio linux-2.6-tip]#

So I think this is really, really nice in that it demonstrates the portability
of perf.data files and the use of build-ids across such alien worlds 🙂

There are some things to fix though, like the bitmap on the header, but things are
looking good.
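As hinted in the snipped warnings above, the manual ~/.debug population shown earlier is what ‘perf archive’ was meant to automate; a rough sketch, from memory of how it ended up working:

perf archive                        # on the source machine, next to perf.data
# that produces perf.data.tar.bz2, with the files indexed by build-id; on the target machine:
mkdir -p ~/.debug && tar xvf perf.data.tar.bz2 -C ~/.debug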

perf on parisc64

It built, after removing -fstack-protector-all, which is not available for that target, and suppressing the libelf-dev tests.

Transferred a perf.data file created on an i386 machine and… it fails. Endianness issues, I guess. Will investigate this fully in the coming days.