aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
zbyerly_ has quit [Remote host closed the connection]
zbyerly_ has joined #ste||ar
vamatya has quit [Ping timeout: 246 seconds]
zbyerly_ has quit [Remote host closed the connection]
zbyerly_ has joined #ste||ar
hkaiser has quit [Quit: bye]
mars0000 has joined #ste||ar
zbyerly_ has quit [Remote host closed the connection]
zbyerly_ has joined #ste||ar
K-ballo has quit [Quit: K-ballo]
zbyerly_ has quit [Remote host closed the connection]
zbyerly_ has joined #ste||ar
ajaivgeorge has quit [Ping timeout: 246 seconds]
ajaivgeorge has joined #ste||ar
zbyerly_ has quit [Remote host closed the connection]
zbyerly_ has joined #ste||ar
EverYoung has quit [Ping timeout: 246 seconds]
zbyerly_ has quit [Ping timeout: 258 seconds]
vamatya has joined #ste||ar
mars0000 has quit [Quit: mars0000]
Matombo has joined #ste||ar
Matombo has quit [Remote host closed the connection]
<heller>
K-ballo: as I said, it makes sense given the identical look of template <typename Concept> Concept foo(Concept c1, Concept c2) and Concept foo(Concept c1, Concept c2)
<K-ballo>
I don't think that holds, short form is equivalent to unnamed concepts, not the signature you have above
<K-ballo>
btw those two are completely different, by today's rules
<heller>
grr, i meant to write template <Concept C> C ...
<heller>
after giving it a bit more thought, I think the short form is a mistake either way
<heller>
void foo(Bar&& b); is already unclear enough. why overload it even more...
<K-ballo>
still different with template <Concept C>, not completely, but significantly
<K-ballo>
the saddest part is that the same-type short form design is entirely based on iterator pairs :/
<heller>
Sure, i get that. the whole concept is designed around the STL
<heller>
(pun intended)
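For reference, a minimal sketch of the two readings under discussion, in C++20 spelling (std::integral stands in for an arbitrary concept; the TS terse form itself is omitted, since its exact rules were what was in flux):

    #include <concepts>

    // Same-type reading (the Concepts TS rule): every use of the concept
    // name in one declaration denotes the same deduced type.
    template <std::integral T>
    T add_same(T a, T b) { return a + b; }

    // Independent reading: each use introduces its own template parameter.
    template <std::integral T, std::integral U>
    auto add_indep(T a, U b) { return a + b; }

    int main() {
        add_same(1, 2);     // ok: both arguments are int
        // add_same(1, 2L); // error under same-type: int vs long
        add_indep(1, 2L);   // ok when each parameter binds independently
        return 0;
    }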
<hkaiser>
introducing the short form post-hoc may change the meaning of existing code...
<heller>
i'd argue that the short form isn't necessary at all
<K-ballo>
that's not what I mean
<K-ballo>
they didn't know whether to go same-type or not, so they tossed an empirical coin and it landed on 'iterator pairs are used all over the STL'
<K-ballo>
those same iterator pairs we want to move away from
<heller>
in the end, it probably doesn't matter, for most use cases, if it is same-type or not
<heller>
I think there are shortcomings to either decision
<K-ballo>
the one that worries me most is the don't-touch-short-form guidelines already forming
<K-ballo>
you may not even have two, you might as well just have void sort(Sortable c); even that's already frowned upon
<heller>
i think the biggest issue with the multiple-type solution is the return type deduction
<K-ballo>
same-type does the same return type deduction multiple-type would
<heller>
and the biggest issue with the short form in general, is that it requires more context to grasp the meaning of a function declaration
<heller>
only if the return type isn't deducible from the arguments passed in
<heller>
(at least from what I understood)
<K-ballo>
oh, my bad, they do?
<K-ballo>
yikes
<heller>
Concept foo(Concept): same-type, the return type is clear. multiple-type: something like auto, but constrained to Concept?
<K-ballo>
I thought same-type did implicit but constrained too, but you are right it doesn't
<K-ballo>
that changes things, I could see a rule that uses terse syntax only for return types in signatures
<K-ballo>
it'd be consistent with Concept var = fun();
<heller>
yes
<K-ballo>
but the minute you add a Concept parameter, the model breaks
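A sketch of the deduce-then-check model they converge on here, in the C++20 spelling (Concept auto); the TS wrote the placeholder without the auto, but the behaviour is the same in this respect:

    #include <concepts>

    int fun() { return 42; }

    // Constrained placeholder variable: the type is deduced from the
    // initializer, then checked against the concept.
    std::integral auto n = fun();   // ok: int satisfies std::integral

    // Constrained deduced return type: deduced from the return statement,
    // then checked, independently of the argument types.
    std::integral auto twice(long x) { return 2 * x; }   // deduces long

    int main() { return twice(n) > 0 ? 0 : 1; }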
<heller>
I'd probably refrain from using the short form altogether in library code
<heller>
but that doesn't really solve the return type issues
<K-ballo>
I've barely played with concepts, for my variant, but my initial approach was to only use terse to replace an auto
<K-ballo>
a conservative approach
<heller>
yeah, in that scenario, the same-type approach of course doesn't make sense at all
<K-ballo>
it rules out using them in generic lambdas, but I don't have any because C++11 :P
<K-ballo>
I also don't have auto multideclarations, as I have no clue what those do
<heller>
i also find it alienating that concept packs should all be deduced to the same type
<K-ballo>
but they don't.. do they?
<heller>
that's what I got out of the discussion
<K-ballo>
that'd be so wrong.... yet consistent with the rest
<K-ballo>
heh, for once I'm happy with an inconsistency :P
<heller>
;)
<heller>
see: conclusion, short form is overly confusing and should be avoided ;)
<K-ballo>
yeah... but that means no Concept var = fun();
<heller>
unless fun returns auto
<K-ballo>
and I'm not going to change that into static_assert(requires decltype(..
<heller>
auto(Concept), maybe?
<heller>
sounds silly
<heller>
so, if I got it right, the short form can be seen to constrain the usage of auto as either return type or function parameter type, correct?
<K-ballo>
it could be, it's not what the TS does
<K-ballo>
there's also the potential inconsistency with non-type template params
<heller>
right
<K-ballo>
template <auto X, Concept Y>
<heller>
where is the inconsistency?
akheir_ has quit [Remote host closed the connection]
<heller>
I get the feeling that the whole specification is turning more and more into a vector<bool> kind of scenario
<heller>
aka: a total mess
EverYoung has joined #ste||ar
<zao>
I see that my informal plea to the WG still holds, "please don't ruin the language more" :)
<K-ballo>
X is a value, Y is a type
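The inconsistency being pointed at, sketched with a C++17 auto NTTP next to a concept-constrained type parameter (std::integral again standing in):

    #include <concepts>

    template <auto X>           // X is a value; its *type* is deduced
    struct Value {};

    template <std::integral Y>  // Y is a type, constrained by a concept
    struct Type {};

    int main() {
        Value<42> v;            // X = 42, an int value
        Type<int> t;            // Y = int, a type
        (void)v; (void)t;
        return 0;
    }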
<heller>
ok
<heller>
hmm
<heller>
that sucks, who needs auto for non-type templates anyway ;)?
<heller>
can't you do something similar with type deduction guides?
aserio has joined #ste||ar
<K-ballo>
we do
<K-ballo>
andrew was consulted back when auto for NTTP was proposed
<K-ballo>
I wonder what he had in mind
<heller>
I sometimes wonder what would happen if this whole group rebooted C++, completely starting over with a new language, knowing everything we know now
<heller>
would we end up in a similar mess?
<heller>
or is it really not as bad as I think it is?
<K-ballo>
we would end up with nothing
<K-ballo>
we wouldn't agree on anything less than perfect
<K-ballo>
C++ survives because we learn to deal with the flaws
hkaiser has quit [Quit: bye]
hkaiser has joined #ste||ar
bikineev has joined #ste||ar
<heller>
ok
Matombo has joined #ste||ar
pree has quit [Read error: Connection reset by peer]
hkaiser has quit [Quit: bye]
denis_blank has joined #ste||ar
hkaiser has joined #ste||ar
<hkaiser>
heller: do you still plan to try Chris' stack overflow detection?
<heller>
hkaiser: I plan to, yes
<heller>
sorry, lost track of it ...
<hkaiser>
heller: ok
<heller>
let me do this now
<hkaiser>
sure, np
<heller>
thanks for the distraction
<hkaiser>
lol
pree has joined #ste||ar
pree has quit [Ping timeout: 248 seconds]
<hkaiser>
K-ballo: what would you think of making future variadic?
<hkaiser>
i.e. future<T...>
<K-ballo>
it's a bit weird, but it seems to fit (after glancing at bryce's paper)
<hkaiser>
nod
<K-ballo>
I should take a look in more detail, like what does get return, etc
<hkaiser>
especially with a proper (implicit or explicit) unwrapped
<K-ballo>
there should be some correspondence with destructured binding declarations
<hkaiser>
get should return a tuple (i.e. what's actually stored in the future)
<hkaiser>
we could keep the unary version separate and unchanged, btw
aserio has quit [Ping timeout: 246 seconds]
<K-ballo>
IIRC MC2 is strongly opposed, I should hear him out
<K-ballo>
matt calabrese
<hkaiser>
well, auto [v1, v2] = future<int, int>().get() should work for sure
aserio has joined #ste||ar
<hkaiser>
K-ballo: what is Matt opposed to?
<K-ballo>
yeah, I'm imagining more like `future<int, int>` being actually some sugar for `future<tuple-or-some-other-decomposable-thingy<int, int>>`
<K-ballo>
the variadic future
<hkaiser>
ok, I'll talk to him
<hkaiser>
well, future<tuple<int, int>> should be something different from future<int, int>
ajaivgeorge has joined #ste||ar
<hkaiser>
and I agree that future<T...> should store its value as a tuple<T...>
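A hypothetical sketch of what that sugar could look like; nothing below is an existing std:: or hpx:: interface, and the name future is reused purely for illustration:

    #include <tuple>

    template <typename... Ts>
    struct future {                     // hypothetical variadic future
        std::tuple<Ts...> state;        // value stored as a tuple<Ts...>
        std::tuple<Ts...> get() { return state; }   // get() hands back the tuple
    };

    int main() {
        future<int, int> f{{1, 2}};
        auto [v1, v2] = f.get();        // structured bindings, as in the log
        return v1 + v2 == 3 ? 0 : 1;
    }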
<ajaivgeorge>
hkaiser: I am online now.
<K-ballo>
> cppsage 12:18
<hkaiser>
ajaivgeorge: wanted to comment on your last commit
<K-ballo>
> yeah but only in the details and how they treat void and how bryce treats typedefs in the 0th and 1-arg cases (they mean something totally different than the N case)
<hkaiser>
ajaivgeorge: using explicit T(0) as the init for the reduce is wrong as it will work only for op == plus<T>()
<hkaiser>
it will break for op == multiplies<T>()
<zao>
Identity element in rings, eh?
<hkaiser>
right
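The identity-element problem in miniature, with std::accumulate standing in for the segmented reduce under discussion:

    #include <functional>
    #include <iostream>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<int> v{2, 3, 4};
        // 0 is the identity of plus, but not of multiplies:
        std::cout << std::accumulate(v.begin(), v.end(), 0, std::plus<int>{})
                  << '\n';   // 9
        std::cout << std::accumulate(v.begin(), v.end(), 0, std::multiplies<int>{})
                  << '\n';   // 0 -- wrong, T(0) zeroes out the product
        std::cout << std::accumulate(v.begin(), v.end(), 1, std::multiplies<int>{})
                  << '\n';   // 24 -- 1 is the multiplicative identity
        return 0;
    }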
<ajaivgeorge>
I was only thinking of plus. hmm, but passing init to each dispatch results in wrong answers even with plus
<hkaiser>
ajaivgeorge: then the underlying algorithm needs to be changed in a way making it independent of init
<hkaiser>
which is possible
<ajaivgeorge>
should i change the current non segmented reduce function object or create a new one?
<hkaiser>
ajaivgeorge: also, looks like the existing segmented transform_reduce is wrong as well :/
<ajaivgeorge>
was just typing that
<ajaivgeorge>
i based reduce on that
EverYoung has quit [Remote host closed the connection]
<hkaiser>
ajaivgeorge: so if you fix yours we can fix transform_reduce as well
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
denis_blank has quit [Quit: denis_blank]
<ajaivgeorge>
ok, i only need a version of the function object which doesn't use the init parameter at all.
EverYoung has joined #ste||ar
<ajaivgeorge>
but I can't give the current functor a default parameter, since init can be anything, and I can't make a function that decides not to use init based on some special value of init. So I may have to create a separate function object and use it in both reduce and transform_reduce, such that the first call to dispatch uses the existing functor and subsequent calls use the new one
EverYoung has quit [Ping timeout: 246 seconds]
<ajaivgeorge>
or use the current approach of storing init in the overall result initially, and then calling only the new function object after that in dispatch. Not sure if this will work for every op, but it should.
<hkaiser>
ajaivgeorge: you're right about having an algorithm implementation not using init
<hkaiser>
if you had that, then you'd only need to pass init to one of the segments
<ajaivgeorge>
yeah, I could just do overall_result = init in the sequential version and segments.push_back(init) in the parallel one, and then use dispatch to call the new functor after that. I will implement this in both by tomorrow. Working on parallel segmented find_end now.
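A sketch of the fix being agreed on: reduce each segment without any init, then combine the partial results with init applied exactly once (the segment layout and names are stand-ins for HPX's internals):

    #include <functional>
    #include <numeric>
    #include <vector>

    template <typename T, typename Op>
    T segmented_reduce(std::vector<std::vector<T>> const& segments, T init, Op op) {
        std::vector<T> partial;
        for (auto const& seg : segments) {
            if (seg.empty()) continue;    // empty segments contribute nothing
            // reduce the segment by itself, seeding with its first element,
            // so no init value is injected per segment
            partial.push_back(
                std::accumulate(seg.begin() + 1, seg.end(), seg.front(), op));
        }
        // init participates exactly once, when combining the partial results
        return std::accumulate(partial.begin(), partial.end(), init, op);
    }

    int main() {
        std::vector<std::vector<int>> segs{{2, 3}, {4}};
        return segmented_reduce(segs, 1, std::multiplies<int>{}) == 24 ? 0 : 1;
    }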
pree has joined #ste||ar
kxkamil has left #ste||ar ["Leaving"]
EverYoung has joined #ste||ar
aserio has quit [Ping timeout: 268 seconds]
<hkaiser>
K-ballo: SG1 is pulling the plug on the concurrency TS :/
<K-ballo>
what does that mean?
<K-ballo>
it won't be merged? it won't be further developed either?
<K-ballo>
that was kinda expected, with all the stuff happening lately
ajaivgeorge has joined #ste||ar
<hkaiser>
nod
<hkaiser>
which will delay things even further
<K-ballo>
fresh start to experiment is a good thing though
<K-ballo>
the TS can serve as spec for the time being
<hkaiser>
except for make_ready_future et al.
hkaiser has quit [Quit: bye]
Matombo has quit [Ping timeout: 248 seconds]
bikineev has quit [Ping timeout: 248 seconds]
parsa[[[w]]] is now known as parsa[w]
david_pfander has quit [Ping timeout: 240 seconds]
taeguk has joined #ste||ar
Matombo has joined #ste||ar
aserio has joined #ste||ar
<taeguk>
Excuse me, when I need to use one "thread" per core for implementing a parallel algorithm, what should I do?
<taeguk>
In detail, what I do is create a fixed number of threads, and each thread fetches additional data when it needs more to process.
<taeguk>
static_partitioner is not suited to what I do.
<taeguk>
In the algorithm (parallel::partition) which I'm implementing, the task can get more partitions dynamically.
<taeguk>
sorry
<taeguk>
In the algorithm (parallel::partition) which I'm implementing, the task has to get more partitions dynamically.
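One way to get a fixed thread-per-core pool where each thread grabs more work on demand, sketched with plain std::thread and an atomic cursor; inside HPX, executors and chunk-size parameters would be the idiomatic route:

    #include <algorithm>
    #include <atomic>
    #include <cstddef>
    #include <thread>
    #include <vector>

    template <typename F>
    void parallel_chunks(std::size_t n, std::size_t chunk, F f) {  // chunk must be > 0
        std::atomic<std::size_t> next{0};
        unsigned cores = std::max(1u, std::thread::hardware_concurrency());
        std::vector<std::thread> workers;
        for (unsigned c = 0; c < cores; ++c)
            workers.emplace_back([&] {
                for (;;) {
                    std::size_t begin = next.fetch_add(chunk);  // claim a chunk
                    if (begin >= n) break;                      // no work left
                    std::size_t end = std::min(begin + chunk, n);
                    for (std::size_t i = begin; i < end; ++i)
                        f(i);                                   // process element i
                }
            });
        for (auto& t : workers) t.join();
    }

    int main() {
        std::vector<int> data(1000, 1);
        std::atomic<int> sum{0};
        parallel_chunks(data.size(), 64, [&](std::size_t i) { sum += data[i]; });
        return sum == 1000 ? 0 : 1;
    }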
eschnett has joined #ste||ar
<heller>
does anyone have a nice (multithreaded) benchmark at hand that is usable to explain the overheads of a future?
<heller>
anything better than fib(), preferably
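Not obviously nicer than fib(), but one self-contained shape for such a measurement, with std::async/std::future standing in (hpx::future numbers would differ; the structure of the comparison is the point):

    #include <chrono>
    #include <cstdio>
    #include <future>

    int work(int x) { return x + 1; }

    int main() {
        using clock = std::chrono::steady_clock;
        constexpr int N = 10000;
        long long sink = 0;

        auto t0 = clock::now();
        for (int i = 0; i < N; ++i)
            sink += work(i);                                       // plain calls
        auto t1 = clock::now();
        for (int i = 0; i < N; ++i)
            sink += std::async(std::launch::async, work, i).get(); // one future per call
        auto t2 = clock::now();

        auto us = [](auto d) {
            return std::chrono::duration_cast<std::chrono::microseconds>(d).count();
        };
        std::printf("plain: %lld us, futures: %lld us (sink=%lld)\n",
                    (long long)us(t1 - t0), (long long)us(t2 - t1), sink);
        return 0;
    }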
hkaiser has joined #ste||ar
<hkaiser>
heller: don't we have something in tests?
<heller>
diehlpk_work: once merged to master, it will be pushed to the docker repo
<heller>
diehlpk_work: on the next HPX build, you'll get the updated build environment
<heller>
no need to build anything manually
<heller>
https://packages.debian.org/stretch/clang <-- that's the clang in (Debian) stable, so no need to add the extra repo, unless you feel like updating to a newer clang makes sense
<heller>
this is what we get on stack overflow now
<hkaiser>
nice
<hkaiser>
very helpful
<heller>
indeed
<diehlpk_work>
heller, I am using this one here FROM stellargroup/hpx:dev
<diehlpk_work>
It is the configuration from Martin.
<diehlpk_work>
I only added new packages to apt-get
<heller>
how do you generate it? How is it pushed to the docker repo?
<diehlpk_work>
heller, I download the stellargroup/hpx:dev inside the circle-ci, add my packages, install cuda, compile hpxcl, and run the tests. The hpxcl image is not pushed to docker
<heller>
trying to install cuda already fails for me
<diehlpk_work>
Thanks, I will try to compile hpxcl again after that
<heller>
and get rid of the apt-get update
<heller>
and please, use PRs ... there is no need for all that noise on the master branch
<diehlpk_work>
Sure, will do. At first I assumed it would be easier than I thought
<K-ballo>
there's no such thing
<heller>
diehlpk_work: next time, if you need updated packages, you can just rebuild the docker_build_env master branch on circle-ci. This will push a new image with the updated packages. then, you can rebuild the HPX master branch, which updates the HPX docker image
eschnett has quit [Ping timeout: 255 seconds]
<diehlpk_work>
Ok, I will do that
<heller>
you should have the appropriate permissions to do that on circle-ci
<heller>
can you see the rebuild button?
<diehlpk_work>
No, not for the docker_build_env
<diehlpk_work>
For Ste||ar group I have access to hpx and hpxcl
<heller>
Ok
<hkaiser>
diehlpk_work: under Projects click 'Add Project'
<diehlpk_work>
Ok, I can see it now
<hkaiser>
and then 'Follow Project' next to the docker_build_env
<diehlpk_work>
hkaiser, Thanks it is working now
akheir has joined #ste||ar
zbyerly_ has quit [Remote host closed the connection]
akheir has quit [Remote host closed the connection]
zbyerly_ has joined #ste||ar
<diehlpk_work>
hkaiser, Finished the first draft of the HPX peridynamics code, and the guys here are impressed by what HPX can do
<diehlpk_work>
Especially the parallel for loop
<hkaiser>
cool!
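The parallel for loop being referred to, in a minimal sketch; the header and namespace names assume an HPX of roughly that era and may differ between versions:

    #include <hpx/hpx_main.hpp>
    #include <hpx/include/parallel_for_loop.hpp>
    #include <cstddef>
    #include <vector>

    int main() {
        std::vector<double> force(1000000, 0.0);
        // run the loop body in parallel across HPX worker threads
        hpx::parallel::for_loop(
            hpx::parallel::execution::par, std::size_t(0), force.size(),
            [&](std::size_t i) {
                force[i] += 1.0;    // per-node computation would go here
            });
        return 0;
    }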
<diehlpk_work>
The most complicated part now is to get a parallel solver integrated into HPX
<diehlpk_work>
And optimize the code with more hpx features
<diehlpk_work>
But we can discuss during my visit
<hkaiser>
diehlpk_work: ok, let's plan for this
<hkaiser>
diehlpk_work: if John gets the resource_manager under control we should be able to make this integration possible
<diehlpk_work>
Cool, this would save us the time of writing a parallel solver in HPX