Category Archives: perf

Using --pid and --tid to record samples in specific threads

To test a patch I wrote to fix a problem reported on lkml, I ended up writing a changeset comment that I think is worth sharing here, as it demonstrates how to use the --pid and --tid command line options available in perf record and perf top. So here it goes.

Jeff Moyer reported these messages:

      Warning:  ... trying to fall back to cpu-clock-ticks
    
    couldn't open /proc/-1/status
    couldn't open /proc/-1/maps
    [ls output]
    [ perf record: Woken up 1 times to write data ]
    [ perf record: Captured and wrote 0.008 MB perf.data (~363 samples) ]

That led me and David Ahern to see that something was fishy in the thread synthesizing routines, at least for the case where the workload is started from perf record: -1 is the default for target_tid (the perf record --tid parameter), so somehow we were trying to synthesize the PERF_RECORD_MMAP and PERF_RECORD_COMM events for thread -1, a bug.

So I investigated this and noticed that when we introduced support for recording a process and its threads using --pid, some bugs were introduced. The way to fix it was, instead of passing target_tid to the event synthesizing routines, to pass the thread_map that has the list of threads for a --pid, or just the single thread for a --tid.
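Conceptually the thread_map is trivial: for --tid it holds that single tid, and for --pid it holds every tid listed under /proc/<pid>/task. A rough Python sketch of the idea (not the actual perf tools code, which is in C):

```python
import os

def thread_map(pid=None, tid=None):
    # --tid: a single-entry map; --pid: all threads of the process,
    # enumerated from /proc/<pid>/task.
    if tid is not None:
        return [tid]
    return sorted(int(t) for t in os.listdir("/proc/%d/task" % pid))
```

The event synthesizing routines can then iterate over real tids instead of being handed the -1 default.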

Checked in the following ways:

On an 8-way machine, run cyclictest:

    [root@emilia ~]# perf record cyclictest -a -t -n -p99 -i100 -d50
    policy: fifo: loadavg: 0.00 0.13 0.31 2/139 28798
    
    T: 0 (28791) P:99 I:100 C:  25072 Min:      4 Act:    5 Avg:    6 Max:     122
    T: 1 (28792) P:98 I:150 C:  16715 Min:      4 Act:    6 Avg:    5 Max:      27
    T: 2 (28793) P:97 I:200 C:  12534 Min:      4 Act:    5 Avg:    4 Max:       8
    T: 3 (28794) P:96 I:250 C:  10028 Min:      4 Act:    5 Avg:    5 Max:      96
    T: 4 (28795) P:95 I:300 C:   8357 Min:      5 Act:    6 Avg:    5 Max:      12
    T: 5 (28796) P:94 I:350 C:   7163 Min:      5 Act:    6 Avg:    5 Max:      12
    T: 6 (28797) P:93 I:400 C:   6267 Min:      4 Act:    5 Avg:    5 Max:       9
    T: 7 (28798) P:92 I:450 C:   5571 Min:      4 Act:    5 Avg:    5 Max:       9
    ^C[ perf record: Woken up 1 times to write data ]
    [ perf record: Captured and wrote 0.108 MB perf.data (~4719 samples) ]
    
    [root@emilia ~]#

This will create one extra thread per CPU:

    [root@emilia ~]# tuna -t cyclictest -CP
                          thread       ctxt_switches
        pid SCHED_ rtpri affinity voluntary nonvoluntary             cmd
      28825   OTHER     0     0xff      2169          671      cyclictest
      28832   FIFO    93        6     52338            1      cyclictest
      28833   FIFO    92        7     46524            1      cyclictest
      28826   FIFO    99        0    209360            1      cyclictest
      28827   FIFO    98        1    139577            1      cyclictest
      28828   FIFO    97        2    104686            0      cyclictest
      28829   FIFO    96        3     83751            1      cyclictest
      28830   FIFO    95        4     69794            1      cyclictest
      28831   FIFO    94        5     59825            1      cyclictest
    [root@emilia ~]#

So we should expect samples only for the above 9 threads when using the --dump-raw-trace (-D) perf report switch to look at the column with the tid:

    [root@emilia ~]# perf report -D | grep RECORD_SAMPLE | cut -d/ -f2 | cut -d: -f1 | sort | uniq -c
        629 28825
        110 28826
        491 28827
        308 28828
        198 28829
        621 28830
        225 28831
        203 28832
         89 28833
    [root@emilia ~]#
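That shell pipeline just counts samples per tid; the same bookkeeping can be sketched in Python, assuming sample lines carry a pid/tid field formatted as in the perf report -D dump above:

```python
import re
from collections import Counter

def count_sample_tids(dump_lines):
    # Equivalent of: grep RECORD_SAMPLE | cut -d/ -f2 | cut -d: -f1 |
    #                sort | uniq -c
    tids = Counter()
    for line in dump_lines:
        if "RECORD_SAMPLE" not in line:
            continue
        m = re.search(r"(\d+)/(\d+):", line)  # the pid/tid field
        if m:
            tids[int(m.group(2))] += 1
    return tids
```
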

So for workloads started by perf record it seems to work. Now for existing workloads: I just ran cyclictest first, without perf record:

    [root@emilia ~]# tuna -t cyclictest -CP
                          thread       ctxt_switches
        pid SCHED_ rtpri affinity voluntary nonvoluntary             cmd
      28859   OTHER     0     0xff       594          200      cyclictest
      28864   FIFO    95        4     16587            1      cyclictest
      28865   FIFO    94        5     14219            1      cyclictest
      28866   FIFO    93        6     12443            0      cyclictest
      28867   FIFO    92        7     11062            1      cyclictest
      28860   FIFO    99        0     49779            1      cyclictest
      28861   FIFO    98        1     33190            1      cyclictest
      28862   FIFO    97        2     24895            1      cyclictest
      28863   FIFO    96        3     19918            1      cyclictest
    [root@emilia ~]#

and then later did:

    [root@emilia ~]# perf record --pid 28859 sleep 3
    [ perf record: Woken up 1 times to write data ]
    [ perf record: Captured and wrote 0.027 MB perf.data (~1195 samples) ]

    [root@emilia ~]#

That collected 3 seconds' worth of samples for pid 28859 and its children:

    [root@emilia ~]# perf report -D | grep RECORD_SAMPLE | cut -d/ -f2 | cut -d: -f1 | sort | uniq -c
         15 28859
         33 28860
         19 28861
         13 28862
         13 28863
         10 28864
         11 28865
          9 28866
        255 28867
    [root@emilia ~]#

Works. The last thing is to check whether looking at just one of those threads also works:

    [root@emilia ~]# perf record --tid 28866 sleep 3
    [ perf record: Woken up 1 times to write data ]
    [ perf record: Captured and wrote 0.006 MB perf.data (~242 samples) ]
    [root@emilia ~]# perf report -D | grep RECORD_SAMPLE | cut -d/ -f2 | cut -d: -f1 | sort | uniq -c
          3 28866
    [root@emilia ~]#

Works too.

Friggin mystery…

I was working on adding a thin python binding for the evsel and evlist abstractions I introduced in the perf tools when I stumbled on this, what peterz calls the "friggin mystery mmap events"…

Let's start with the simple python script that started this:

import perf

def main():
    # Monitor all CPUs and all threads: a system wide session
    cpus = perf.cpu_map()
    threads = perf.thread_map()
    # task = 1 asks for fork/exit events, comm = 1 for COMM events
    evsel = perf.evsel(task = 1, comm = 1,
                       wakeup_events = 1, sample_period = 1,
                       sample_id_all = 1,
                       sample_type = perf.SAMPLE_PERIOD | perf.SAMPLE_CPU | perf.SAMPLE_TID)
    evsel.open(cpus = cpus, threads = threads)
    evlist = perf.evlist()
    evlist.add(evsel)
    evlist.mmap(cpus = cpus, threads = threads)
    while True:
        evlist.poll(timeout = -1)
        for cpu in cpus:
            event = evlist.read_on_cpu(cpu)
            if not event:
                continue
            print "cpu: %2d, pid: %4d, tid: %4d" % (event.sample_cpu,
                                                    event.sample_pid,
                                                    event.sample_tid),
            print event

if __name__ == '__main__':
    main()

So what does this script do? It asks the kernel perf infrastructure to generate events every time a thread is created and when it exits.

Running it:

[root@emilia linux]# tools/perf/python/twatch.py
cpu:  0, pid:    0, tid:    0 { type: mmap, pid: 0, tid: 0, start: 0, length: 0x694ae632, offset: 0, filename:  }
cpu:  3, pid:    0, tid:    0 { type: mmap, pid: 0, tid: 0, start: 0x3, length: 0xc804d5ca, offset: 0, filename:  }
cpu:  6, pid:    0, tid:    0 { type: mmap, pid: 0, tid: 0, start: 0x6, length: 0xbba638e9, offset: 0, filename:  }
cpu:  1, pid: 1893, tid: 1893 { type: mmap, pid: 1893, tid: 1893, start: 0x1, length: 0x7899c915, offset: 0, filename:  }
cpu:  2, pid:   43, tid:   43 { type: mmap, pid: 43, tid: 43, start: 0x2, length: 0xca5a622e, offset: 0, filename:  }
cpu:  5, pid: 1523, tid: 1525 { type: mmap, pid: 1523, tid: 1525, start: 0x5, length: 0x9a4cd05f, offset: 0, filename:  }
cpu:  4, pid:    0, tid:    0 { type: mmap, pid: 0, tid: 0, start: 0x4, length: 0x7a4d4989, offset: 0, filename:  }

cpu:  2, pid: 1543, tid: 1543 { type: fork, pid: 1970, ppid: 1543, tid: 1970, ptid: 1543, time: 515738951270}
cpu:  6, pid: 1970, tid: 1970 { type: comm, pid: 1970, tid: 1970, comm: sshd }
cpu:  2, pid:   60, tid:   60 { type: fork, pid: 1971, ppid: 60, tid: 1971, ptid: 60, time: 515743769565}
cpu:  3, pid: 1971, tid: 1971 { type: fork, pid: 1972, ppid: 1971, tid: 1972, ptid: 1971, time: 515743810636}
cpu:  7, pid: 1972, tid: 1972 { type: comm, pid: 1972, tid: 1972, comm: modprobe }
cpu:  7, pid: 1972, tid: 1972 { type: exit, pid: 1972, ppid: 1972, tid: 1972, ptid: 1972, time: 515745080846}
cpu:  1, pid: 1971, tid: 1971 { type: exit, pid: 1971, ppid: 1971, tid: 1971, ptid: 1971, time: 515746027272}
cpu:  7, pid: 1972, tid: 1972 { type: exit, pid: 1972, ppid: 1972, tid: 1972, ptid: 1972, time: 515745086164}
cpu:  1, pid: 1971, tid: 1971 { type: exit, pid: 1971, ppid: 1971, tid: 1971, ptid: 1971, time: 515746034982}
cpu:  7, pid: 1970, tid: 1970 { type: fork, pid: 1973, ppid: 1970, tid: 1973, ptid: 1970, time: 515747725784}
cpu:  0, pid: 1278, tid: 1281 { type: fork, pid: 1278, ppid: 1278, tid: 1974, ptid: 1281, time: 515803257141}
cpu:  3, pid: 1278, tid: 1974 { type: comm, pid: 1278, tid: 1974, comm: rs:main Q:Reg }
cpu:  2, pid: 1973, tid: 1973 { type: exit, pid: 1973, ppid: 1973, tid: 1973, ptid: 1973, time: 515803913973}
cpu:  0, pid: 1970, tid: 1970 { type: fork, pid: 1975, ppid: 1970, tid: 1975, ptid: 1970, time: 515804961019}
cpu:  2, pid: 1973, tid: 1973 { type: exit, pid: 1973, ppid: 1973, tid: 1973, ptid: 1973, time: 515803917806}
cpu:  3, pid: 1975, tid: 1975 { type: fork, pid: 1976, ppid: 1975, tid: 1976, ptid: 1975, time: 515808031062}
cpu:  7, pid: 1976, tid: 1976 { type: comm, pid: 1976, tid: 1976, comm: bash }

Everything seems nice except for those first 8 mmap events, one per CPU, that I didn’t ask for in the evsel (event selector) constructor, here:

    evsel = perf.evsel(task = 1, comm = 1,
                       wakeup_events = 1, sample_period = 1,
                       sample_id_all = 1,
                       sample_type = perf.SAMPLE_PERIOD | perf.SAMPLE_CPU | perf.SAMPLE_TID)

See 'task = 1'? That asks for fork and exit events to be generated in the ring buffer; 'comm = 1' asks for COMM events (the short, 16 bytes max, process name); but I didn't specify 'mmap = 1', which asks for events to be generated when executable mmaps take place…

As I know that those events are stashed in the ring buffer by the perf_event_mmap_output routine, I thought about using ‘perf probe’ to figure out what was causing those events, so I did:

[root@emilia linux]# perf probe mmap_out=perf_event_mmap_output
Add new event:
  probe:mmap_out       (on perf_event_mmap_output)

You can now use it on all perf tools, such as:

	perf record -e probe:mmap_out -aR sleep 1

[root@emilia linux]#

That created a new event, one that will take place every time perf_event_mmap_output is called. To reduce typing, I aliased it to 'mmap_out'.

Now let's use it, asking for callchains to be taken every time this event happens:

[root@emilia linux]# perf record -e probe:mmap_out -g tools/perf/python/twatch.py 
cpu:  7, pid:    0, tid:    0 { type: mmap, pid: 0, tid: 0, start: 0x7, length: 0xc6b8162f, offset: 0, filename:  }
cpu:  2, pid:    0, tid:    0 { type: mmap, pid: 0, tid: 0, start: 0x2, length: 0xdf243434, offset: 0, filename:  }
cpu:  3, pid:    0, tid:    0 { type: mmap, pid: 0, tid: 0, start: 0x3, length: 0xf9f2d7ab, offset: 0, filename:  }
cpu:  5, pid: 1893, tid: 1893 { type: mmap, pid: 1893, tid: 1893, start: 0x5, length: 0xa00963c1, offset: 0, filename:  }
cpu:  1, pid:    0, tid:    0 { type: mmap, pid: 0, tid: 0, start: 0x1, length: 0x95744474, offset: 0, filename:  }
cpu:  0, pid: 1893, tid: 1893 { type: mmap, pid: 1893, tid: 1893, start: 0, length: 0xa0655031, offset: 0, filename:  }
cpu:  4, pid:    0, tid:    0 { type: mmap, pid: 0, tid: 0, start: 0x4, length: 0xb2b21883, offset: 0, filename:  }
^CTraceback (most recent call last):
  File "tools/perf/python/twatch.py", line 41, in <module>
    main()
[ perf record: Woken up 1 times to write data ]
  File "tools/perf/python/twatch.py", line 30, in main
    evlist.poll(timeout = -1)
[ perf record: Captured and wrote 0.012 MB perf.data (~545 samples) ]
KeyboardInterrupt
[root@emilia linux]#

And there they are. Let's see what caused them:

[root@emilia linux]# perf report --stdio
# Events: 30 
#
# Overhead    Command      Shared Object                  Symbol
# ........  .........  .................  ......................
#
   100.00%  twatch.py  [kernel.kallsyms]  [k] perf_event_mmap_output
            |
            --- perf_event_mmap_output
                perf_event_mmap
               |          
               |--96.67%-- mmap_region
               |          do_mmap_pgoff
               |          |          
               |          |--93.10%-- sys_mmap_pgoff
               |          |          sys_mmap
               |          |          system_call
               |          |          __mmap
               |          |          |          
               |          |           --100.00%-- _dl_map_object
               |          |          
               |           --6.90%-- elf_map
               |                     load_elf_binary
               |                     search_binary_handler
               |                     load_script
               |                     search_binary_handler
               |                     do_execve
               |                     sys_execve
               |                     stub_execve
               |                     0x3d068a6dd7
               |          
                --3.33%-- install_special_mapping
                          arch_setup_additional_pages
                          load_elf_binary
                          search_binary_handler
                          load_script
                          search_binary_handler
                          do_execve
                          sys_execve
                          stub_execve
                          0x3d068a6dd7

#
# (For a higher level overview, try: perf report --sort comm,dso)
#
[root@emilia linux]# 

Ok, loading ELF files, the python interpreter…

The investigation is still underway, but using the perf tools in this way sounds so cool that I thought about sharing it here :-)

Cross platform perf.data analysis support

There are still some problems related to loading vmlinux files, but those are
unrelated to the feature implemented in this patch, so they will get fixed in
the next patches. Here are some results:

1. collect perf.data file on a Fedora 12 machine, x86_64, 64-bit userland

2. transfer it to a Debian Testing machine, PARISC64, 32-bit userland

acme@parisc:~/git/linux-2.6-tip$ perf buildid-list | head -5
74f9930ee94475b6b3238caf3725a50d59cb994b [kernel.kallsyms]
55fdd56670453ea66c011158c4b9d30179c1d049 /lib/modules/2.6.33-rc4-tip+/kernel/net/ipv4/netfilter/ipt_MASQUERADE.ko
41adff63c730890480980d5d8ba513f1c216a858 /lib/modules/2.6.33-rc4-tip+/kernel/net/ipv4/netfilter/iptable_nat.ko
90a33def1077bb8e97b8a78546dc96c2de62df46 /lib/modules/2.6.33-rc4-tip+/kernel/net/ipv4/netfilter/nf_nat.ko
984c7bea90ce1376d5c8e7ef43a781801286e62d /lib/modules/2.6.33-rc4-tip+/kernel/drivers/net/tun.ko

acme@parisc:~/git/linux-2.6-tip$ perf buildid-list | tail -5
22492f3753c6a67de5c7ccbd6b863390c92c0723 /usr/lib64/libXt.so.6.0.0
353802bb7e1b895ba43507cc678f951e778e4c6f /usr/lib64/libMagickCore.so.2.0.0
d10c2897558595efe7be8b0584cf7e6398bc776c /usr/lib64/libfprint.so.0.0.0
a83ecfb519a788774a84d5ddde633c9ba56c03ab /home/acme/bin/perf
d3ca765a8ecf257d263801d7ad8c49c189082317 /usr/lib64/libdwarf.so.0.0
acme@parisc:~/git/linux-2.6-tip$
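A build-id is just the descriptor of an NT_GNU_BUILD_ID (type 3) ELF note, which perf records so the right symbols can be matched later regardless of host. A hedged sketch of parsing such a note blob (assuming little-endian note data; parse_build_id is a hypothetical helper, not a perf tools function):

```python
import struct

NT_GNU_BUILD_ID = 3

def parse_build_id(note):
    # An ELF note is: u32 namesz, u32 descsz, u32 type, then the name
    # and the descriptor, each padded to a 4-byte boundary.  For an
    # NT_GNU_BUILD_ID note the name is "GNU\0" and the descriptor is
    # the build-id bytes themselves.
    off = 0
    while off + 12 <= len(note):
        namesz, descsz, ntype = struct.unpack_from("<3I", note, off)
        off += 12
        name = note[off:off + namesz].rstrip(b"\0")
        off += (namesz + 3) & ~3
        desc = note[off:off + descsz]
        off += (descsz + 3) & ~3
        if ntype == NT_GNU_BUILD_ID and name == b"GNU":
            return desc.hex()
    return None
```
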

acme@parisc:~/git/linux-2.6-tip$ perf report --sort comm
The file [kernel.kallsyms] cannot be used, trying to use /proc/kallsyms…

^^^^ The problem related to vmlinux handling, it shouldn’t be trying this
^^^^ rather alien /proc/kallsyms at all…

/lib64/libpthread-2.10.2.so with build id 5c68f7afeb33309c78037e374b0deee84dd441f6 not found, continuing without symbols
/lib64/libc-2.10.2.so with build id eb4ec8fa8b2a5eb18cad173c92f27ed8887ed1c1 not found, continuing without symbols
/home/acme/bin/perf with build id a83ecfb519a788774a84d5ddde633c9ba56c03ab not found, continuing without symbols
/usr/sbin/openvpn with build id f2037a091ef36b591187a858d75e203690ea9409 not found, continuing without symbols
Failed to open /lib/modules/2.6.33-rc4-tip+/kernel/drivers/net/e1000e/e1000e.ko, continuing without symbols
Failed to open /lib/modules/2.6.33-rc4-tip+/kernel/drivers/net/wireless/iwlwifi/iwlcore.ko, continuing without symbols
<SNIP more complaints about not finding the right build-ids,
those will have to wait for 'perf archive' or plain
copying what was collected by 'perf record' on the x86_64,
source machine, see further below for an example of this>

# Samples: 293085637
#
# Overhead          Command
# ........  ...............
#
61.70%             find
23.50%             perf
 5.86%          swapper
 3.12%             sshd
 2.39%             init
 0.87%             bash
 0.86%            sleep
 0.59%      dbus-daemon
 0.25%             hald
 0.24%   NetworkManager
 0.19%  hald-addon-rfki
 0.15%          openvpn
 0.07%             phy0
 0.07%         events/0
 0.05%          iwl3945
 0.05%         events/1
 0.03%      kondemand/0
acme@parisc:~/git/linux-2.6-tip$

Which matches what we get when running the same command for the same perf.data
file on the F12, x86_64, source machine:

[root@doppio linux-2.6-tip]# perf report --sort comm
# Samples: 293085637
#
# Overhead          Command
# ........  ...............
#
61.70%             find
23.50%             perf
 5.86%          swapper
 3.12%             sshd
 2.39%             init
 0.87%             bash
 0.86%            sleep
 0.59%      dbus-daemon
 0.25%             hald
 0.24%   NetworkManager
 0.19%  hald-addon-rfki
 0.15%          openvpn
 0.07%             phy0
 0.07%         events/0
 0.05%          iwl3945
 0.05%         events/1
 0.03%      kondemand/0
[root@doppio linux-2.6-tip]#

The other modes work as well, modulo the problem with vmlinux:

acme@parisc:~/git/linux-2.6-tip$ perf report --sort comm,dso 2> /dev/null | head -15
# Samples: 293085637
#
# Overhead          Command                      Shared Object
# ........  ...............  ..............................
#
35.11%             find                   ffffffff81002b5a
18.25%             perf                   ffffffff8102235f
16.17%             find  libc-2.10.2.so
 9.07%             find  find
 5.80%          swapper                   ffffffff8102235f
 3.95%             perf  libc-2.10.2.so
 2.33%             init                   ffffffff810091b9
 1.65%             sshd  libcrypto.so.0.9.8k
 1.35%             find  [e1000e]
 0.68%            sleep  libc-2.10.2.so
acme@parisc:~/git/linux-2.6-tip$

And the lack of the right buildids:

acme@parisc:~/git/linux-2.6-tip$ perf report --sort comm,dso,symbol 2> /dev/null | head -15
# Samples: 293085637
#
# Overhead          Command                      Shared Object  Symbol
# ........  ...............  ..............................  ......
#
35.11%             find                   ffffffff81002b5a  [k] 0xffffffff81002b5a
18.25%             perf                   ffffffff8102235f  [k] 0xffffffff8102235f
16.17%             find  libc-2.10.2.so                     [.] 0x00000000045782
 9.07%             find  find                               [.] 0x0000000000fb0e
 5.80%          swapper                   ffffffff8102235f  [k] 0xffffffff8102235f
 3.95%             perf  libc-2.10.2.so                     [.] 0x0000000007f398
 2.33%             init                   ffffffff810091b9  [k] 0xffffffff810091b9
 1.65%             sshd  libcrypto.so.0.9.8k                [.] 0x00000000105440
 1.35%             find  [e1000e]                           [k] 0x00000000010948
 0.68%            sleep  libc-2.10.2.so                     [.] 0x0000000011ad5b
acme@parisc:~/git/linux-2.6-tip$

But if we:

acme@parisc:~/git/linux-2.6-tip$ ls ~/.debug
ls: cannot access /home/acme/.debug: No such file or directory
acme@parisc:~/git/linux-2.6-tip$ mkdir -p ~/.debug/lib64/libc-2.10.2.so/
acme@parisc:~/git/linux-2.6-tip$ scp doppio:.debug/lib64/libc-2.10.2.so/* ~/.debug/lib64/libc-2.10.2.so/
acme@doppio's password:
eb4ec8fa8b2a5eb18cad173c92f27ed8887ed1c1                   100% 1783KB 714.7KB/s   00:02
acme@parisc:~/git/linux-2.6-tip$ mkdir -p ~/.debug/.build-id/eb
acme@parisc:~/git/linux-2.6-tip$ ln -s ../../lib64/libc-2.10.2.so/eb4ec8fa8b2a5eb18cad173c92f27ed8887ed1c1 ~/.debug/.build-id/eb/4ec8fa8b2a5eb18cad173c92f27ed8887ed1c1
acme@parisc:~/git/linux-2.6-tip$ perf report --dsos libc-2.10.2.so 2> /dev/null

# dso: libc-2.10.2.so
# Samples: 64281170
#
# Overhead          Command  Symbol
# ........  ...............  ......
#
14.98%             perf  [.] __GI_strcmp
12.30%             find  [.] __GI_memmove
 9.25%             find  [.] _int_malloc
 7.60%             find  [.] _IO_vfprintf_internal
 6.10%             find  [.] _IO_new_file_xsputn
 6.02%             find  [.] __GI_close
 3.08%             find  [.] _IO_file_overflow_internal
 3.08%             find  [.] malloc_consolidate
 3.08%             find  [.] _int_free
 3.08%             find  [.] __strchrnul
 3.08%             find  [.] __getdents64
 3.08%             find  [.] __write_nocancel
 3.08%            sleep  [.] __GI__dl_addr
 3.08%             sshd  [.] __libc_select
 3.08%             find  [.] _IO_new_file_write
 3.07%             find  [.] _IO_new_do_write
 3.06%             find  [.] __GI___errno_location
 3.05%             find  [.] __GI___libc_malloc
 3.04%             perf  [.] __GI_memcpy
 1.71%             find  [.] __fprintf_chk
 1.29%             bash  [.] __gconv_transform_utf8_internal
 0.79%      dbus-daemon  [.] __GI_strlen
#
# (For a higher level overview, try: perf report --sort comm,dso)
#
acme@parisc:~/git/linux-2.6-tip$

Which matches what we get on the source, F12, x86_64 machine:

[root@doppio linux-2.6-tip]# perf report --dsos libc-2.10.2.so
# dso: libc-2.10.2.so
# Samples: 64281170
#
# Overhead          Command  Symbol
# ........  ...............  ......
#

14.98%             perf  [.] __GI_strcmp
12.30%             find  [.] __GI_memmove
 9.25%             find  [.] _int_malloc
 7.60%             find  [.] _IO_vfprintf_internal
 6.10%             find  [.] _IO_new_file_xsputn
 6.02%             find  [.] __GI_close
 3.08%             find  [.] _IO_file_overflow_internal
 3.08%             find  [.] malloc_consolidate
 3.08%             find  [.] _int_free
 3.08%             find  [.] __strchrnul
 3.08%             find  [.] __getdents64
 3.08%             find  [.] __write_nocancel
 3.08%            sleep  [.] __GI__dl_addr
 3.08%             sshd  [.] __libc_select
 3.08%             find  [.] _IO_new_file_write
 3.07%             find  [.] _IO_new_do_write
 3.06%             find  [.] __GI___errno_location
 3.05%             find  [.] __GI___libc_malloc
 3.04%             perf  [.] __GI_memcpy
 1.71%             find  [.] __fprintf_chk
 1.29%             bash  [.] __gconv_transform_utf8_internal
 0.79%      dbus-daemon  [.] __GI_strlen
#
# (For a higher level overview, try: perf report --sort comm,dso)
#
[root@doppio linux-2.6-tip]#

So I think this is really, really nice in that it demonstrates the portability
of perf.data files and the use of build-ids across such alien worlds :-)
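The ~/.debug cache populated above follows one rule: the object is stashed under ~/.debug/<dso path>/<build-id>, and indexed by a symlink at ~/.debug/.build-id/<first two hex digits>/<remaining digits>. A sketch of that path computation (build_id_cache_paths is a hypothetical helper, not part of the perf tools):

```python
def build_id_cache_paths(debug_dir, dso_path, build_id):
    # Object copy: <debug_dir>/<dso_path>/<build_id>
    obj = "%s%s/%s" % (debug_dir, dso_path, build_id)
    # Index symlink: <debug_dir>/.build-id/<xx>/<rest>, where xx is the
    # first byte of the build-id in hex
    link = "%s/.build-id/%s/%s" % (debug_dir, build_id[:2], build_id[2:])
    return obj, link
```

This is why the mkdir/scp/ln -s sequence above was enough for perf report to find the libc symbols.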

There are some things to fix though, like the bitmap in the header, but things
are looking good.

perf on parisc64

It built, after removing -fstack-protector-all, which is not available for that target, and suppressing the libelf-dev tests.

Transferred a perf.data file created on an i386 machine and… it fails. Endianness issues, I guess. I will fully investigate this in the next few days.
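The kind of breakage to expect: a multi-byte field written on a little-endian i386 machine and read back with native byte order on big-endian parisc64. In Python's struct notation, using the sample count from the session above as an example u64:

```python
import struct

# A u64 written little-endian on the i386 source machine...
raw = struct.pack("<Q", 293085637)

# ...read with big-endian (parisc64 native) byte order comes out as garbage:
assert struct.unpack(">Q", raw)[0] != 293085637

# The reader has to honor the byte order the perf.data file was written in:
assert struct.unpack("<Q", raw)[0] == 293085637
```
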

Modules encoded

Ended up encoding modules as PERF_RECORD_MMAP events; details at http://lkml.org/lkml/2010/1/7/370. Let's see how people react.

Recording where modules were loaded in perf.data

While trying to fix the build-id generation so as not to produce duplicates, I noticed another problem that needs to be solved before we can introduce perf archive and be able to analyse a perf.data file recorded on one machine on another one, possibly with a different architecture and OS.

The problem is similar to the relocatable kernel problem solved today: we need perf events that state where kernel modules were loaded. Right now we use the current /proc/modules to get that information, but it may no longer list some modules, those unloaded after perf record ran and before perf report.

To properly fix that we need the kernel infrastructure to emit PERF_MODULE_LOAD/PERF_MODULE_UNLOAD events, just like it emits PERF_MMAP/PERF_MUNMAP events when DSOs get loaded by means of executable mmap, so that there are no races and we can support long running perf record sessions where modules get loaded and unloaded.

Tomorrow I'll work on synthesizing such events in perf record, and when it all works we can do the kernel bits and stop synthesizing them in user space.
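Synthesizing those events in user space means walking /proc/modules, whose lines look like "e1000e 139264 0 - Live 0xffffffffa0053000" (name, size, refcount, dependencies, state, load address). A rough sketch of the parsing, hedged on that format:

```python
def parse_proc_modules(lines):
    # Keep the name, load address and size of each module: that is the
    # information needed to synthesize a PERF_RECORD_MMAP-like event
    # per module.
    modules = []
    for line in lines:
        fields = line.split()
        name, size, addr = fields[0], int(fields[1]), int(fields[-1], 16)
        modules.append((name, addr, size))
    return modules
```

Of course this snapshot is exactly what races against module unload, which is why the kernel-emitted events are the proper fix.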
