00:23
parsa has joined #ste||ar
02:15
<github> hpx/performance_optimizations d0e95d2 Hartmut Kaiser: Speeding up accessing the resource partitioner and the topology info...
02:16
<Guest96225> [hpx] hkaiser opened pull request #3015: Speeding up accessing the resource partitioner and the topology info (master...performance_optimizations) https://git.io/vFH4G
02:40
jaafar_ has quit [Ping timeout: 268 seconds]
02:55
diehlpk has quit [Ping timeout: 248 seconds]
03:00
hkaiser has quit [Quit: bye]
04:11
K-ballo has quit [Quit: K-ballo]
08:14
parsa has quit [Quit: Zzzzzzzzzzzz]
10:07
<github> hpx/gh-pages 373887c StellarBot: Updating docs
11:33
<github> hpx/performance_optimizations 7afcce6 Hartmut Kaiser: Speeding up accessing the resource partitioner and the topology info...
12:38
hkaiser has joined #ste||ar
12:42
<github> hpx/master 05a2335 Hartmut Kaiser: Merge pull request #3008 from STEllAR-GROUP/remove-get_full_machine_mask...
12:43
<github> hpx/master 0fc026b Mikael Simberg: Silence warning about casting away qualifiers in itt_notify.hpp
12:43
<github> hpx/master 33813a0 Hartmut Kaiser: Merge pull request #3012 from msimberg/silence-const-cast...
13:33
<hkaiser> do you have the full pdf?
13:34
<heller> Not right now
13:34
<heller> Have to log on to the university network
13:34
<heller> Remind me tomorrow
13:34
<hkaiser> pls do ;)
13:34
<heller> Off to the swimming pool :p
13:38
<hkaiser> heller: I've got it
13:41
<heller> hkaiser: those guys are actively working on a real time scheduler now
13:41
<hkaiser> yah, the conclusions are that the overheads are too high for their use cases
13:42
david_pfander has joined #ste||ar
13:43
david_pfander has quit [Client Quit]
13:47
<heller> hkaiser: difficult situation
13:47
<hkaiser> any idea what that guy means?
13:47
<heller> hkaiser: traditionally their workload is small due to small CPUs.
13:47
<heller> Chicken-and-egg problem
13:48
<heller> Real time for multi core is an unsolved problem
13:50
<heller> hkaiser: re reddit post. Seems like a premature optimization person
13:51
<heller> Looking for a solution for something he thinks is bad without even saying what he wants
13:51
<heller> Or currently has
13:51
<hkaiser> writing an answer saying exactly this ;)
14:10
K-ballo has joined #ste||ar
15:23
denisblank has joined #ste||ar
15:36
parsa has joined #ste||ar
15:53
parsa has quit [Quit: Zzzzzzzzzzzz]
15:55
parsa has joined #ste||ar
16:14
eschnett has quit [Quit: eschnett]
17:04
<github> hpx/fixing_dataflow b1565b4 Hartmut Kaiser: Adding atomic 'finished' flag....
17:05
<hkaiser> denisblank: yt?
17:25
parsa has quit [Quit: Zzzzzzzzzzzz]
18:03
<denisblank> hkaiser: yes
18:17
<zao> Ah, saw that you already found the PDF, bah.
18:18
<hkaiser> denisblank: bbiab, sorry gotta run now
18:18
<hkaiser> denisblank: see the PR
18:25
<denisblank> hkaiser: ok
18:43
eschnett has joined #ste||ar
18:45
<heller> wash: hpx takes a lock when scheduling tasks :p
18:46
<hkaiser> heller: only when scheduling threads
18:47
<hkaiser> denisblank: back now
18:49
jaafar_ has joined #ste||ar
18:55
eschnett has quit [Quit: eschnett]
19:25
parsa has joined #ste||ar
20:54
diehlpk has joined #ste||ar
21:03
<heller> hkaiser: sure, that's what I wrote. The OP's initial assessment that locks are a bad thing is what's to blame
21:04
<heller> I'm still wondering about how to get rid of the thread_map_...
21:05
<heller> Incidentally, Mikael stumbled over the very same question
21:29
<diehlpk> heller, Have you ever debugged over ssh with circle-ci?
21:29
<diehlpk> I tried to access the docker image but could not find the build folder
21:33
Smasher has quit [Ping timeout: 246 seconds]
21:38
Smasher has joined #ste||ar
21:41
<hkaiser> heller: no, you wrote that a lock is acquired while scheduling a task
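As a rough illustration of the distinction being drawn above: a minimal sketch of a generic scheduler that acquires a lock only while a new thread is placed on its ready queue, not while the work itself runs. This is not HPX's actual scheduler code; the type and member names here are hypothetical.

#include <deque>
#include <functional>
#include <mutex>

// Hypothetical scheduler fragment: the mutex guards the queue of ready
// threads, so it is held only while a thread is scheduled or dequeued,
// never while the scheduled work executes.
struct scheduler
{
    std::mutex mtx_;                           // guards queue_
    std::deque<std::function<void()>> queue_;  // ready threads

    void schedule_thread(std::function<void()> work)
    {
        std::lock_guard<std::mutex> l(mtx_);   // lock held only for the push
        queue_.push_back(std::move(work));
    }

    bool run_one()
    {
        std::function<void()> work;
        {
            std::lock_guard<std::mutex> l(mtx_);
            if (queue_.empty())
                return false;
            work = std::move(queue_.front());
            queue_.pop_front();
        }
        work();                                // executed outside the lock
        return true;
    }
};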
21:50
denisblank has quit [Quit: denisblank]
22:10
<diehlpk> hkaiser, Any idea about the circle-ci issue?
22:10
<hkaiser> diehlpk: what issue?
22:10
<diehlpk> It is quite strange that the hpx examples are running, but not PeriHPX
22:11
<diehlpk> terminate called after throwing an instance of 'hpx::detail::exception_with_info<hpx::exception>'
22:11
<diehlpk> what(): description is nullptr: HPX(bad_parameter)
22:13
<hkaiser> does it run locally for you?
22:14
<diehlpk> Yes, on Fedora and Ubuntu
22:14
<hkaiser> hmm, how do I build this?
22:15
<diehlpk> Check out the repo and build yaml-cpp
22:16
<hkaiser> give me some time for this pls
22:16
<diehlpk> blaze and blaze-iterative are shipped within the repo
22:16
<diehlpk> No problem
22:16
<diehlpk> It is not important
22:16
<diehlpk> By the way, you are now the third author of the PeriHPX paper
22:16
<hkaiser> diehlpk: have you tried ssh'ing into the circleci instance?
22:17
<zao> For us playing along at home, what repo is that?
22:17
<diehlpk> Yes, I started the docker image from the build, ran the hpx example there, and it is working
22:17
<diehlpk> Running my example results in this strange error
22:18
<hkaiser> I meant to directly ssh into the circleci instance as it builds things
22:18
<diehlpk> I was only able to run the docker image once the circle.yaml run finished
22:19
<diehlpk> zao, It is a private repo, but I can add you
22:19
<diehlpk> hkaiser, ssh allows you to access the node
22:20
<diehlpk> But docker lets me load the image only when all things are finished
22:20
<hkaiser> at the top of the circleci site is a tab 'Debug via SSH'
22:20
<diehlpk> I did this
22:21
<diehlpk> ssh + docker run -v $PWD:/hpx -w /hpx/ -i -t peridhpx:build_env /bin/bash
22:22
<diehlpk> In the docker image, I was running the fibonacci example and hello_world
22:22
<diehlpk> These run perfectly
22:22
<diehlpk> When running my own examples, this error always shows up
22:22
<diehlpk> We had the same for HPXCL
22:23
<diehlpk> I assume it may relate to the cloned containers from the hpx dev docker image
22:26
<diehlpk> One difference is that we use hpx_main and in the other case we use hpx_init
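For context on that difference, a minimal sketch of the two HPX initialization styles as commonly used; the application code in both variants is hypothetical.

// Variant A (separate program): include hpx_main.hpp and keep a plain main();
// the HPX runtime is started implicitly and main() already runs as an HPX thread.
#include <hpx/hpx_main.hpp>

int main(int argc, char* argv[])
{
    // application code, the HPX runtime is already up here
    return 0;
}

// Variant B (separate program): explicit startup via hpx::init(); the runtime
// invokes hpx_main(), and hpx::finalize() tells it to shut down again.
#include <hpx/hpx_init.hpp>

int hpx_main(int argc, char* argv[])
{
    // application code runs here, on an HPX thread
    return hpx::finalize();
}

int main(int argc, char* argv[])
{
    return hpx::init(argc, argv);    // blocks until hpx::finalize() is called
}

Variant A is the more convenient form for examples; Variant B gives explicit control over when the runtime starts and stops.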
22:54
<zao> diehlpk: I would be 'zao' on github, but like HK, I promise nothing :)
22:56
<hkaiser> diehlpk: well, run it in gdb and see where it blows up?