[so] [Tema4][Linux] Memcheck failed

Mihai Barbulescu b12mihai at gmail.com
Wed Apr 24 14:30:07 EEST 2019


Hi Rares,

There are several things I don't understand:

1. What do you mean by test 16 (15 on the VM)? The tests have a clear
identifier, so I can't tell which test is actually failing.
2. The Linux OOM killer sends a kill when a process makes excessive
allocations (page faults) beyond a limit that, if I remember correctly,
is set via ulimit (see the sketch below). The error looks Vagrant-related,
so it is possible that you started the machine with Vagrant and the
Vagrant process itself got killed.
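
If the limit in question is the per-process address-space one (an assumption:
`ulimit -v` maps to RLIMIT_AS), a small sketch like the one below can be run
inside the VM to check whether such a cap is actually in place:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
        struct rlimit rl;

        /* Query the soft limit on the process address space. */
        if (getrlimit(RLIMIT_AS, &rl) != 0) {
                perror("getrlimit");
                return 1;
        }

        if (rl.rlim_cur == RLIM_INFINITY)
                printf("RLIMIT_AS: unlimited\n");
        else
                printf("RLIMIT_AS: %llu bytes\n",
                       (unsigned long long)rl.rlim_cur);

        return 0;
}

Note that a hard RLIMIT_AS usually makes allocations fail with ENOMEM rather
than trigger the OOM killer, so this check only helps rule one cause out.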

You are giving us so few details about how you run things that all we can
do is speculate like traders on the stock exchange.

On Wed, 24 Apr 2019 at 10:16, Rares Folea via so <so at cursuri.cs.pub.ro> wrote:
>
> Thanks for the hint.
> Indeed, the problem was that in some scenarios I was not waiting for all the threads.
>
> Now I can see that all the memory tests pass on vmchecker.
>
>
> Only on the virtual machine, when I run valgrind over that test 16 (15 on the vm), the system logs report out-of-memory:
> Apr 24 10:11:14 vagrant kernel: [109075.123564] Out of memory: Kill process 11250 (memcheck-amd64-) score 866 or sacrifice child
> Apr 24 10:11:14 vagrant kernel: [109075.124387] Killed process 11250 (memcheck-amd64-) total-vm:4736696kB, anon-rss:309400kB, file-rss:0kB, shmem-rss:0kB
> Apr 24 10:11:14 vagrant kernel: [109075.153180] oom_reaper: reaped process 11250 (memcheck-amd64-), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
>
>
> On 23 Apr 2019, at 22:27, Razvan Crainea <razvan.crainea at gmail.com> wrote:
>
> Hi, Rareș!
>
> Most likely you are not waiting for all the threads; that is where the
> leak comes from. As for the process being killed, can you check the
> system logs to make sure it is not a crash?
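>
> A minimal, generic sketch of the pattern (not the assignment code): every
> thread started with pthread_create() has to be joined (or detached)
> eventually, otherwise its thread-local storage block, the calloc() done in
> allocate_dtv, is reported by Memcheck as "possibly lost":
>
> #include <pthread.h>
>
> #define NUM_THREADS 4
>
> static void *worker(void *arg)
> {
>         (void)arg;
>         return NULL;
> }
>
> int main(void)
> {
>         pthread_t tids[NUM_THREADS];
>         int i;
>
>         for (i = 0; i < NUM_THREADS; i++)
>                 if (pthread_create(&tids[i], NULL, worker, NULL) != 0)
>                         return 1;
>
>         /* Without this loop the TLS block of each thread is still live
>          * at exit and shows up as "possibly lost"; joining lets glibc
>          * reuse or free it. */
>         for (i = 0; i < NUM_THREADS; i++)
>                 pthread_join(tids[i], NULL);
>
>         return 0;
> }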
>
> All the best,
> Răzvan
>
> On Tue, Apr 23, 2019 at 9:10 PM Rares Folea via so <so at cursuri.cs.pub.ro> wrote:
>
>
> Good evening!
>
> I am running into the following problems with the memory tests:
>
> When I run tests 15 and 19 (14 and 18 on vmchecker) under valgrind on the virtual machine, I get two similar errors:
>
> ==4684== 288 bytes in 1 blocks are possibly lost in loss record 2 of 2
> ==4684==    at 0x4C3204A: calloc (vg_replace_malloc.c:762)
> ==4684==    by 0x40134A6: allocate_dtv (dl-tls.c:286)
> ==4684==    by 0x40134A6: _dl_allocate_tls (dl-tls.c:530)
> ==4684==    by 0x5049227: allocate_stack (allocatestack.c:627)
> ==4684==    by 0x5049227: pthread_create@@GLIBC_2.2.5 (pthread_create.c:644)
> ==4684==    by 0x4E3E65B: so_fork (in /home/student/l3-so-assignments/4-scheduler/checker-lin/libscheduler.so)
> ==4684==    by 0x10ABDE: test_sched_handler_15 (test_exec.c:282)
> ==4684==    by 0x4E3E47E: start_thread (in /home/student/l3-so-assignments/4-scheduler/checker-lin/libscheduler.so)
> ==4684==    by 0x50486DA: start_thread (pthread_create.c:463)
> ==4684==    by 0x538188E: clone (clone.S:95)
>
> ==4684==      possibly lost: 576 bytes in 2 blocks
>
> ==4684== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
>
> I cannot figure out what is causing these two errors or why they show up only in these two tests.
> I should mention that I wait for the threads to finish with pthread_join, which should release the resources associated with the pthread_t structure.
> In addition, on test 16 (15 on vmchecker), valgrind seems to use far too much memory on the local machine and gets killed as a result.
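>
> One way to make sure nothing created by so_fork() is left unjoined is to
> record every pthread_t and join them all at teardown. This is only a
> rough sketch; the names (tids, tid_count, a so_end()-style cleanup hook)
> are illustrative and depend on how the scheduler is structured:
>
> #include <pthread.h>
>
> #define MAX_TIDS 4096       /* illustrative upper bound */
>
> static pthread_t tids[MAX_TIDS];
> static unsigned int tid_count;
>
> /* call right after a successful pthread_create() inside so_fork() */
> static void remember_tid(pthread_t tid)
> {
>         if (tid_count < MAX_TIDS)
>                 tids[tid_count++] = tid;
> }
>
> /* call once from the scheduler cleanup path (e.g. so_end()) */
> static void join_all_tids(void)
> {
>         unsigned int i;
>
>         for (i = 0; i < tid_count; i++)
>                 pthread_join(tids[i], NULL);
>         tid_count = 0;
> }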
>
> student at vagrant:~/l3-so-assignments/4-scheduler/checker-lin$ LD_LIBRARY_PATH=. valgrind --tool=memcheck --track-origins=yes --leak-check=full _test/run_test 16
> ==4693== Memcheck, a memory error detector
> ==4693== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
> ==4693== Using Valgrind-3.15.0.GIT and LibVEX; rerun with -h for copyright info
> ==4693== Command: _test/run_test 16
> ==4693==
> Killed
>
>
> Thank you
> _______________________________________________
> http://ocw.cs.pub.ro/courses/so/info/lista-discutii
>
>
>
>
> --
> Răzvan Crainea
>
>
> _______________________________________________
> http://ocw.cs.pub.ro/courses/so/info/lista-discutii



-- 
Best regards,
Mihai Bărbulescu

