aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
diehlpk_mobile has quit [Ping timeout: 246 seconds]
diehlpk_mobile has joined #ste||ar
EverYoun_ has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
diehlpk_mobile2 has joined #ste||ar
diehlpk_mobile has quit [Ping timeout: 246 seconds]
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
parsa has joined #ste||ar
diehlpk_mobile has joined #ste||ar
diehlpk_mobile2 has quit [Ping timeout: 260 seconds]
diehlpk_mobile2 has joined #ste||ar
diehlpk_mobile has quit [Read error: Connection reset by peer]
diehlpk_mobile2 has quit [Read error: Connection reset by peer]
diehlpk_mobile has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
parsa has quit [Quit: Zzzzzzzzzzzz]
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 240 seconds]
diehlpk_mobile2 has joined #ste||ar
diehlpk_mobile has quit [Read error: Connection reset by peer]
diehlpk_mobile2 has quit [Read error: Connection reset by peer]
diehlpk_mobile has joined #ste||ar
diehlpk has joined #ste||ar
anushi has quit [Ping timeout: 240 seconds]
diehlpk_mobile has quit [Read error: Connection reset by peer]
diehlpk_mobile has joined #ste||ar
diehlpk_mobile has quit [Read error: Connection reset by peer]
diehlpk_mobile has joined #ste||ar
diehlpk_mobile2 has joined #ste||ar
diehlpk_mobile has quit [Read error: Connection reset by peer]
diehlpk_mobile2 has quit [Read error: Connection reset by peer]
diehlpk_mobile has joined #ste||ar
jaafar has quit [Ping timeout: 240 seconds]
diehlpk_mobile has quit [Read error: Connection reset by peer]
diehlpk_mobile has joined #ste||ar
diehlpk_mobile has quit [Read error: Connection reset by peer]
<jbjnr>
heller_ or simbergm: at the end of wait_or_add_new we call "bool canexit = cleanup_terminated(true);" - is this something essential we must keep?
<jbjnr>
(or is it checked somewhere else anyway)
<simbergm>
jbjnr: I don't remember 100% but I would say it can be removed (with minor changes)
<simbergm>
scheduling_loop.hpp:675 checks the result but then calls cleanup_terminated again, would be cleaner to separate the two
<simbergm>
so that wait_or_add_new returns true if there is no more work to be done (and ignores terminated threads)
<simbergm>
but then you're removing wait_or_add_new so...
<jbjnr>
exactly
<simbergm>
so scheduling_loop.hpp:675 could probably do completely without the if (wait_or_add_new...) and it looks like it would do the right thing
<hkaiser>
\o/ removing code is fun!
<simbergm>
then get_next_thread would only return false if there is no more work to do, so no need to check (almost) anything after that, just make sure terminated threads are cleaned up
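A very rough mock of the simplified loop shape described above (not the actual HPX code; apart from the names get_next_thread and cleanup_terminated, everything here is made up for illustration): once get_next_thread() alone means "no runnable work", the exit path only has to make sure terminated threads are cleaned up, with no separate wait_or_add_new() check.

    #include <deque>
    #include <functional>
    #include <utility>

    struct mock_scheduler
    {
        std::deque<std::function<void()>> work;

        bool get_next_thread(std::function<void()>& thrd)
        {
            if (work.empty())
                return false;
            thrd = std::move(work.front());
            work.pop_front();
            return true;
        }

        // pretend cleanup; returns true once nothing is left to clean up
        bool cleanup_terminated(bool /*delete_all*/)
        {
            return true;
        }
    };

    void scheduling_loop(mock_scheduler& sched)
    {
        std::function<void()> thrd;
        while (true)
        {
            if (sched.get_next_thread(thrd))
            {
                thrd();    // run the task
            }
            else if (sched.cleanup_terminated(true))
            {
                break;     // no runnable work and nothing left to clean up
            }
        }
    }

    int main()
    {
        mock_scheduler sched;
        sched.work.push_back([] {});
        scheduling_loop(sched);
        return 0;
    }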
<simbergm>
yep, it sure is
<hkaiser>
jbjnr: I might be able to come to your workshop in June
<jbjnr>
\o/
<jbjnr>
we arranged it to be right after the C++ meeting, to encourage people like yourself
<hkaiser>
you'll need to give me more information what's expected of me
<Anushi1998>
Why is serialization added? When are we transmitting objects from one locality to another?
simbergm has joined #ste||ar
<hkaiser>
serialization of a type is needed to be able to pass an instance of that type to an action (or to return it from one)
<zao>
This is your friendly reminder that the Indian term "a doubt" would be "a question" in the Western world. ;)
<hkaiser>
zao: heh
<K-ballo>
this example seems to only operate within one locality, but it does invoke an action remotely on itself
<K-ballo>
looks like it might just be a bad example...
<hkaiser>
Anushi1998: we don't know at compile time whether a particular action will be invoked remotely or not, so we have to assume that it will be
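A minimal sketch of what that means in practice (the type point, the function shift, and the exact include paths are illustrative, not taken from the discussion): the argument and return type of a plain action carry a Boost.Serialization-style serialize member so HPX can ship them between localities, even if the action ends up running locally.

    #include <hpx/hpx_main.hpp>
    #include <hpx/include/actions.hpp>
    #include <hpx/include/runtime.hpp>
    #include <hpx/include/serialization.hpp>

    struct point
    {
        double x;
        double y;

        // called for both saving and loading
        template <typename Archive>
        void serialize(Archive& ar, unsigned)
        {
            ar & x & y;
        }
    };

    point shift(point p)
    {
        p.x += 1.0;
        return p;
    }
    HPX_PLAIN_ACTION(shift, shift_action);

    int main()
    {
        // even though the target here is the local locality, the types must
        // be serializable because the call could just as well be remote
        point p = shift_action()(hpx::find_here(), point{1.0, 2.0});
        return p.x == 2.0 ? 0 : 1;
    }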
<hkaiser>
K-ballo: not surprising
<Anushi1998>
hkaiser: Okay, thanks :)
<Anushi1998>
zao: Sure! I will keep it in mind :)
jakub_golinowski has joined #ste||ar
jakub_golinowski has quit [Client Quit]
mcopik has joined #ste||ar
<github>
[hpx] sithhell created pipeline_example (+1 new commit): https://git.io/vxUk4
<github>
hpx/pipeline_example 054103f Thomas Heller: Adding Pipeline example...
<github>
[hpx] sithhell opened pull request #3237: Adding Pipeline example (master...pipeline_example) https://git.io/vxUkz
mcopik has quit [Ping timeout: 256 seconds]
mcopik_ has joined #ste||ar
<github>
[hpx] biddisco created remove-schedulers (+3 new commits): https://git.io/vxULN
<github>
hpx/remove-schedulers 954a5ca Mikael Simberg: Remove hierarchy, periodic priority and throttling schedulers
<github>
hpx/remove-schedulers ed96973 Mikael Simberg: Clean up documentation for using schedulers
<github>
hpx/remove-schedulers 0048169 Mikael Simberg: Remove ensure_hwloc function (leftover from making hwloc compulsory)
<K-ballo>
woa, so much removing
<jbjnr>
simbergm: I have rebased your remove_schedulers branch onto latest master and removed the conflict. aha. I pushed to stellar instead of your repo.
<jbjnr>
fixed
<jbjnr>
I want to kill off these schedulers asap, they make my cleanup harder!
parsa has joined #ste||ar
<K-ballo>
what was the scheduler that relied on breaking boost.atomic?
<jbjnr>
???
<jbjnr>
don't remember that one - should I look for it somehow?
<K-ballo>
neh, I just wanted to know if it is one of the removed ones
<K-ballo>
there was one scheduler that relied on atomics of non trivially copyable types
<K-ballo>
so it had to keep using old boost::atomic, which did not diagnose
<K-ballo>
and is the only reason we still keep boost.atomic around
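For context, a small standalone illustration (plain standard C++, no HPX types) of why this matters: std::atomic rejects non-trivially-copyable types at compile time, whereas old boost::atomic accepted them without a diagnostic, which is what that scheduler depended on.

    #include <atomic>
    #include <string>
    #include <type_traits>

    // a node with a non-trivially-copyable member (illustrative only)
    struct node
    {
        std::string payload;
        node* next = nullptr;
    };

    static_assert(!std::is_trivially_copyable<node>::value,
        "node is not trivially copyable");

    // std::atomic<node> a;   // ill-formed: std::atomic requires a trivially
                              // copyable type and diagnoses the violation;
                              // older boost::atomic compiled this silently

    int main() { return 0; }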
<jbjnr>
If I just search for boost::atomic in the schedulers ...
<K-ballo>
you should find some lockfree deque_node
<K-ballo>
but I just tried searching and could not identify anything
<jbjnr>
simbergm: crap - I messed up the rebase of your branch - fixing it now
<hkaiser>
K-ballo: ABP scheduler
<K-ballo>
yes! that's the one
<jbjnr>
ABP interesting. I use bits of that
<K-ballo>
is it gone now?
<K-ballo>
ouch
<hkaiser>
K-ballo: there is a PR
<jbjnr>
I use the abp_fifo/lifo in my scheduler
<hkaiser>
#3186, looks like it does not touch the abp stuff, though :/
<jbjnr>
take tasks from hot end, steal from cold end, to improve cache reuse
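A minimal sketch of that hot/cold idea (mutex-based and deliberately not lock-free, unlike the actual abp queues): the owning worker pushes and pops at the back, the "hot" end with the best cache locality, while thieves steal from the front, the "cold" end.

    #include <deque>
    #include <mutex>
    #include <optional>
    #include <utility>

    template <typename Task>
    class hot_cold_queue
    {
        std::deque<Task> tasks_;
        std::mutex mtx_;

    public:
        void push(Task t)                  // owner: hot end
        {
            std::lock_guard<std::mutex> l(mtx_);
            tasks_.push_back(std::move(t));
        }

        std::optional<Task> pop()          // owner: hot end
        {
            std::lock_guard<std::mutex> l(mtx_);
            if (tasks_.empty())
                return std::nullopt;
            Task t = std::move(tasks_.back());
            tasks_.pop_back();
            return t;
        }

        std::optional<Task> steal()        // thief: cold end
        {
            std::lock_guard<std::mutex> l(mtx_);
            if (tasks_.empty())
                return std::nullopt;
            Task t = std::move(tasks_.front());
            tasks_.pop_front();
            return t;
        }
    };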
nikunj has joined #ste||ar
<jbjnr>
that's the PR I just broke :)
<hkaiser>
jbjnr: in my experience the added cost of the abp scheduler cancels out the cache benefits
eschnett has joined #ste||ar
<hkaiser>
YMMV
<jbjnr>
I have a flag for turning it on and off. Timings will be given when I'm done with all this tweaking
<hkaiser>
cool
<simbergm>
jbjnr: thanks! I didn't realize it had conflicts, I can also do(/continue) the rebase if you want
<jbjnr>
just pushed the fixed branch - if it passes pycicle tests, I'll merge it
<simbergm>
okay, nice
<simbergm>
but wait, are you still running pycicle with the cades config PR? I was running pycicle just for some PRs but your instance still gives bad results
<github>
hpx/std-atomic 2e99d24 Agustin K-ballo Berge: Replace the last usages of boost::atomic
diehlpk_work has joined #ste||ar
sharonlam has joined #ste||ar
<sharonlam>
diehlpk_work: what is the full name for EMU nodal discretization?
<diehlpk_work>
sharonlam, I do not know
<diehlpk_work>
EMU stands for a name of a code
EverYoung has joined #ste||ar
Anushi1998 has joined #ste||ar
<sharonlam>
I see thx
<diehlpk_work>
To all our GSoC students: have you seen the note about working hours in the GSoC student FAQs?
<diehlpk_work>
How much time does GSoC participation take?
<diehlpk_work>
You are expected to spend around 30+ hours a week working on your project during the 3 month coding period. If you already have an internship, another summer job, or plan to be gone on vacation for more than a week during that time, GSoC is not the right program for you this year.
hkaiser has joined #ste||ar
<sharonlam>
yea I heard that people work full-time
<zao>
75% fulltime, nice.
aserio has quit [Ping timeout: 256 seconds]
aserio has joined #ste||ar
EverYoung has quit [Ping timeout: 256 seconds]
<verganz>
according to my experience from a conversation with a GSoC'17 participant, he agreed with his mentor that he could spend less time on the project during the exams, and make up that time after the examinations
<verganz>
I am in the same situation, because I'm studying at the same university; we have exams in June
<diehlpk_work>
Yes, when you need to work less for one or two weeks, this is ok
<verganz>
Ok, that's what I'm talking about
<jbjnr>
tests.unit.parallel.executors.executor_parameters always fails on my laptop.
mcopik_ has quit [Read error: Connection reset by peer]
<jbjnr>
is that a new thing or should I expect it?
<jbjnr>
(It times out)
<sharonlam>
diehlpk_work: what is the scope of the material for the peridynamics project? (homogeneous/brittle/density/etc.)
<diehlpk_work>
verganz, please read my private message
<diehlpk_work>
sharonlam, Just one easy material
<diehlpk_work>
The focus is on the parallel computing and load balancing
<jbjnr>
verganz: our experience has been that students frequently say they'll work hard (after exams, for example), but in reality can be quite relaxed about the definition of 'hard'
<sharonlam>
yea it's really easy to lose the focus
<diehlpk_work>
sharonlam, Implementing the PMB model is sufficient
<diehlpk_work>
Adding new models is not complicated
<sharonlam>
diehlpk_work: ahaa, so no need to worry about choosing delta and other parameters
Anushi1998 has quit [Ping timeout: 256 seconds]
<diehlpk_work>
No, it is really on parallel computing
<diehlpk_work>
You should understand the principle of this model
<diehlpk_work>
So you know what you implement, but do not think about the mathematical issues
<sharonlam>
great to know, I was pretty intimidated by the physics proof at first
<simbergm>
jbjnr: I've seen that fail on rostam, but very rarely
<diehlpk_work>
No, just understand the basic principle, like neighbor search and exchange of forces
<simbergm>
and I'm never sure if it's really that test that's broken or something else in hpx
<zao>
I can't test anything, my machines are all offline thanks to ISP troubleshooting \o/
<sharonlam>
diehlpk_work: yea, you basically summed up the general picture. I'll start by ignoring the small functions that enhance the accuracy
<jbjnr>
simbergm: ok. it fails every time for me, so it must be something fundamental
<jbjnr>
I'll have a look when I get a moment
<jbjnr>
just testing now that I've removed wait_or_add_new
<zao>
Is that macOS or another OS?
<jbjnr>
zao: linux laptop
<sharonlam>
does hpx provide mechanisms to do load balancing?
<jbjnr>
sharonlam: of meshes/data? not directly. You can tell it to move something, but you have to decide what to move, we don't have a general purpose partitioning algorithm anywhere
<jbjnr>
but if you want a summer project .... Zoltan reimplemented in HPX - now that would be awesome
<sharonlam>
for example, if I want to do a 1-D array addition and partition it across different localities, how can I sum the subtotals with hpx?
mcopik has joined #ste||ar
<sharonlam>
sorry if it's not clear or too naive, I'm really new to this field
<galabc>
is for_each actually an algorithm from HPX? If it is, how is it different from HPX for_loop algorithm?
<zao>
Both the SC++L and we have a for_each, IIRC.
<K-ballo>
HPX's parallel for_each corresponds to C++17 parallel std::for_each
<K-ballo>
for_loop is a proposed extension in a newer parallelism TS, I believe it keeps induction variables around?
<K-ballo>
let's just say it is a lower level parallel loop construct
<galabc>
ok, smart executors were used as an execution policy for the for_each loop in the article
<K-ballo>
if I remember correctly, for_loop comes with its own mini DSEL and all
<galabc>
do they also work on HPX for_loops?
<K-ballo>
for_each is much much simpler, it just calls some function on each of the elements of a range
<galabc>
Do the smart executors work on for_loops or only on for_each?
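To make the difference concrete, a small hedged sketch (include paths and the exact for_loop/induction spelling are as recalled for HPX of this era and may differ slightly): for_each just applies a callable to each element, while for_loop is index based and can carry induction objects whose values advance with the loop.

    #include <hpx/hpx_main.hpp>
    #include <hpx/include/parallel_for_each.hpp>
    #include <hpx/include/parallel_for_loop.hpp>

    #include <cstddef>
    #include <vector>

    int main()
    {
        std::vector<double> v(100, 1.0);

        // for_each: call a function on every element of the range, nothing more
        hpx::parallel::for_each(hpx::parallel::execution::par,
            v.begin(), v.end(), [](double& x) { x *= 2.0; });

        // for_loop: index based, with an induction variable j that starts at 0
        // and advances by a stride of 2 alongside the loop index i
        hpx::parallel::for_loop(hpx::parallel::execution::par,
            std::size_t(0), v.size(),
            hpx::parallel::induction(std::size_t(0), 2),
            [&](std::size_t i, std::size_t j) { v[i] += double(j); });

        return 0;
    }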
<verganz>
diehlpk_work, thanks for comments, I got the idea
<verganz>
should it be a more theoretical description with some mathematical modelling, or would it be better to use some code insertions etc. in the proposal?
<jbjnr>
sharonlam: are you using the partitioned vector stuff? only hkaiser knows what's going on there. You'll want some kind of reduce algorithm on top of the partitioned vector, but I've never tried using that stuff.
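For the earlier 1-D sum question, a rough sketch of what that could look like with a partitioned_vector, assuming the segmented overload of reduce is available for its iterators (include paths and layout details are from memory for HPX of this era):

    #include <hpx/hpx_main.hpp>
    #include <hpx/include/partitioned_vector.hpp>
    #include <hpx/include/parallel_reduce.hpp>

    #include <iostream>

    // the element type has to be registered once in a source file so the
    // backing partitions can be created on remote localities
    HPX_REGISTER_PARTITIONED_VECTOR(double)

    int main()
    {
        // spread 1000 elements of value 1.0 across all localities
        hpx::partitioned_vector<double> v(
            1000, 1.0, hpx::container_layout(hpx::find_all_localities()));

        // each partition is summed where it lives and the per-locality
        // subtotals are combined into one result
        double total = hpx::parallel::reduce(
            hpx::parallel::execution::par, v.begin(), v.end(), 0.0);

        std::cout << total << "\n";    // 1000
        return 0;
    }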
<diehlpk_work>
verganz, when you say you'd like to have collision or contact provided, you have to explain which algorithm you will use
<diehlpk_work>
Same for saying you'll provide boundary conditions: you have to say force or displacement conditions...
<diehlpk_work>
How are these implemented
<diehlpk_work>
Everyone knows that one could implement things, but we are interested in how things are done
<sharonlam>
not really, I'm going to partition a grid of 2D/3D nodes and calculate their interaction forces. I was asking the question about vector addition just to get an idea of domain partitioning in hpx
<sharonlam>
maybe I should think about the data structure of the grid first
<github>
hpx/master 8aee503 nikunj: Fix #3124: Build hello world client through make tests.unit.build
<github>
hpx/master a281166 Mikael Simberg: Merge pull request #3178 from NK-Nikunj/fix-#3124...
mbremer has joined #ste||ar
<mbremer>
@hkaiser: yt?
EverYoung has joined #ste||ar
<hkaiser>
mbremer: ready whenever you are
galabc has quit [Quit: Leaving]
aserio has quit [Quit: aserio]
mbremer has quit [Client Quit]
aserio has joined #ste||ar
sharonlam has left #ste||ar [#ste||ar]
K-ballo has quit [Ping timeout: 240 seconds]
EverYoung has quit [Remote host closed the connection]
jaafar has joined #ste||ar
EverYoung has joined #ste||ar
K-ballo has joined #ste||ar
<simbergm>
zao: do you remember what compiler/version/boost etc you used when testing the migrate component test?
<zao>
Clang 3.8-ish, Boost 1.65-something, on a container I think ran debian.
<zao>
I don't have access to my logs nor my compile machine, thanks to my ISP :)
<zao>
I can check in an hour or two.
<simbergm>
okay, no worries, I'm just trying to figure out if I'm seeing patterns where there are none for that test (i.e. if it fails with some specific configuration)
<simbergm>
I think I'm just seeing things
<simbergm>
or imagining things
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
<zao>
IIRC, the image also has a GCC of some sort to test with.
EverYoun_ has joined #ste||ar
EverYoun_ has quit [Remote host closed the connection]