{
   don't care about cache hit stats
   Helgrind:Race
   fun:_starpu_msi_cache_hit
   ...
}
{
   don't care about cache miss stats
   Helgrind:Race
   fun:_starpu_msi_cache_miss
   ...
}
{
   known race, but not problematic in practice, see comment in _starpu_tag_clear
   Helgrind:LockOrder
   ...
   fun:_starpu_tag_free
   fun:_starpu_htbl_clear_tags
   ...
   fun:_starpu_tag_clear
   fun:starpu_shutdown
   ...
}
{
   There is actually no race on current_mode, because the mode cannot change unexpectedly until _starpu_notify_data_dependencies() is called further down. Valgrind cannot know about such a software rwlock.
   Helgrind:Race
   fun:_starpu_release_data_on_node
   fun:_starpu_push_task_output
   ...
}
{
   We do not care about races on profiling statistics
   Helgrind:Race
   fun:_starpu_worker_get_status
   fun:_starpu_worker_reset_profiling_info_with_lock
   ...
}
{
   This is racy, but since we'll always put the same values, this is not a problem.
   Helgrind:Race
   fun:_starpu_codelet_check_deprecated_fields
   ...
}
{
   This is racy, but we don't care, it's only a statistic
   Helgrind:Race
   fun:starpu_task_nsubmitted
   ...
}
{
   This is racy, but we don't care, it's only a statistic
   Helgrind:Race
   fun:starpu_task_nready
   ...
}
{
   This is racy, but we don't care, it's only a statistic
   Helgrind:Race
   fun:_starpu_bus_update_profiling_info
   ...
}
{
   fscanf error
   Memcheck:Cond
   ...
   fun:fscanf
   fun:_starpu_load_bus_performance_files
   ...
}
{
   mc / handle locking order. We always take handle before mc, except when we tidy memory, where we use trylock, but helgrind doesn't handle that :/ https://bugs.kde.org/show_bug.cgi?id=243232
   Helgrind:LockOrder
   fun:mutex_trylock_WRK
   fun:__starpu_spin_trylock
   ...
}
{
   mc / handle locking order1
   Helgrind:LockOrder
   ...
   fun:__starpu_spin_lock
   fun:try_to_free_mem_chunk
   ...
}
{
   mc / handle locking order2
   Helgrind:LockOrder
   ...
   fun:__starpu_spin_lock
   fun:try_to_find_reusable_mem_chunk
   ...
}
{
   mc / handle locking order3
   Helgrind:LockOrder
   ...
   fun:__starpu_spin_lock
   fun:free_potentially_in_use_mc
   ...
}
{
   mc / handle locking order4
   Helgrind:LockOrder
   ...
   fun:__starpu_spin_lock
   fun:free_potentially_in_use_mc
   ...
}
{
   mc / handle locking order5
   Helgrind:LockOrder
   ...
   fun:__starpu_spin_lock
   fun:register_mem_chunk
   ...
}
{
   mc / handle locking order6
   Helgrind:LockOrder
   ...
   fun:__starpu_spin_lock
   fun:_starpu_request_mem_chunk_removal
   ...
}
{
   mc / handle locking order7
   Helgrind:LockOrder
   ...
   fun:__starpu_spin_lock
   fun:_starpu_allocate_interface
   ...
}
{
   mc / handle locking order8
   Helgrind:LockOrder
   ...
   fun:__starpu_spin_lock
   fun:_starpu_memchunk_recently_used
   ...
}
{
   TODO1: This is temporary. It perhaps does not pose a problem, because only the worker takes this mutex. Fixing this will require changing the scheduler interface so that the schedulers themselves take the scheduling lock, not the caller. Filter it out for now, so we can see other races more easily.
   Helgrind:LockOrder
   fun:mutex_lock_WRK
   fun:simple_worker_pull_task
   ...
}
{
   TODO1: This is temporary. It perhaps does not pose a problem, because only the worker takes this mutex. Fixing this will require changing the scheduler interface so that the schedulers themselves take the scheduling lock, not the caller. Filter it out for now, so we can see other races more easily.
   Helgrind:LockOrder
   fun:mutex_lock_WRK
   fun:starpu_pthread_mutex_lock_sched
   fun:_starpu_sched_component_worker_lock_scheduling
   fun:simple_worker_pull_task
   ...
}
{
   TODO2: This is temporary. It perhaps does not pose a problem, because only the worker takes this mutex. Fixing this will require changing the scheduler interface so that the schedulers themselves take the scheduling lock, not the caller. Filter it out for now, so we can see other races more easily.
   Helgrind:LockOrder
   fun:mutex_lock_WRK
   fun:_starpu_sched_component_lock_worker
   fun:simple_worker_pull_task
   ...
}
{
   TODO2: This is temporary. It perhaps does not pose a problem, because only the worker takes this mutex. Fixing this will require changing the scheduler interface so that the schedulers themselves take the scheduling lock, not the caller. Filter it out for now, so we can see other races more easily.
   Helgrind:LockOrder
   fun:mutex_lock_WRK
   fun:_starpu_sched_component_lock_worker
   fun:_starpu_sched_component_worker_lock_scheduling
   fun:simple_worker_pull_task
   ...
}
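# Usage sketch: the file and program names below are illustrative, not
# mandated by StarPU. Save these entries to a file (e.g. starpu.suppr)
# and pass it to Valgrind so the known-benign races and lock-order
# reports above are filtered out of the output:
#
#   valgrind --tool=helgrind --suppressions=starpu.suppr ./a_starpu_program
#
# (The Memcheck:Cond entry only matches when running under the default
# memcheck tool.) Templates for new entries can be generated by adding
# --gen-suppressions=all to the Valgrind command line and copying the
# emitted blocks into this file.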