khuck has quit [Remote host closed the connection]
shahrzad has joined #ste||ar
Yorlik has joined #ste||ar
hkaiser has quit [Quit: bye]
khuck has joined #ste||ar
diehlpk has joined #ste||ar
diehlpk has joined #ste||ar
diehlpk has quit [Changing host]
diehlpk has quit [Remote host closed the connection]
khuck has quit [Remote host closed the connection]
khuck has joined #ste||ar
shahrzad has quit [Read error: Connection reset by peer]
shahrzad has joined #ste||ar
khuck has quit []
<gonidelis>
I have submitted a proposal for GSoC 2020 on range based new algorithms implementation! I am so looking forward to your feedback either directly to the doc or with a comment right here (just make sure to tag me). I would be glad to pm my proposal to any member of the community in case they are interested. Any comments are appreciated. Please help
<gonidelis>
understand what shall improve and where to focus. Thank you all!
<gonidelis>
help me*
<gonidelis>
I ll make sure to inform hkaiser first thing tomorrow morning
gonidelis has quit [Remote host closed the connection]
gonidelis has joined #ste||ar
diehlpk_work has quit [Remote host closed the connection]
gonidelis has quit [Remote host closed the connection]
shahrzad has quit [Ping timeout: 256 seconds]
shahrzad has joined #ste||ar
baocvcv has quit [Remote host closed the connection]
baocvcv has joined #ste||ar
shahrzad has quit [Quit: Leaving]
baocvcv has quit [Ping timeout: 256 seconds]
<heller1>
nikunj: nice results!
baocvcv has joined #ste||ar
parsa has quit [*.net *.split]
jaafar has quit [*.net *.split]
parsa has joined #ste||ar
jaafar has joined #ste||ar
nikunj97 has joined #ste||ar
baocvcv has quit [Quit: Leaving]
Hashmi has joined #ste||ar
<nikunj97>
heller1, did you go through the code?
<nikunj97>
it's essentially the same as yours, except that I added a Grid abstraction over hpx::compute::vector and got rid of the iterators
<heller1>
no, sorry
<nikunj97>
and instead of going over one row in parallel for, I increased the grain size by doing multiple rows in one iteration
<nikunj97>
so the increase in grain size provided better MLUPS
<nikunj97>
and with simd it increased even further
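The row-batching idea described above can be sketched like this - a minimal illustration using std::async in place of hpx::async; the grid type, kernel, and all names are hypothetical, not the actual benchmark code:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <future>
#include <vector>

// Placeholder kernel: each task updates a *batch* of rows instead of a single
// one, so the per-task work (grain size) grows and the fixed task-spawn
// overhead is amortized. A real stencil would read neighboring cells.
void update_rows(std::vector<std::vector<double>>& grid,
                 std::size_t first, std::size_t last)
{
    for (std::size_t r = first; r < last; ++r)
        for (double& x : grid[r])
            x += 1.0;
}

// Spawn one task per batch of rows_per_task rows, then join them all.
void parallel_update(std::vector<std::vector<double>>& grid,
                     std::size_t rows_per_task)
{
    std::vector<std::future<void>> tasks;
    for (std::size_t r = 0; r < grid.size(); r += rows_per_task)
    {
        std::size_t last = std::min(r + rows_per_task, grid.size());
        tasks.push_back(std::async(std::launch::async,
                                   update_rows, std::ref(grid), r, last));
    }
    for (auto& f : tasks)
        f.get();
}
```

Tuning rows_per_task trades scheduling overhead against available parallelism, which is exactly the trade-off discussed below.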
<heller1>
cool
<heller1>
very nice
<heller1>
so are you at the bandwidth limit now?
<nikunj97>
I've asked a friend to benchmark the application by finding the ideal grain size and plot the rooflines for other processors as well
<nikunj97>
heller1, close to it yes
<heller1>
very nice!
<heller1>
are you still looking at a single core run, or did you switch to multiple cores already?
<nikunj97>
I'm doing both
<heller1>
ok
<nikunj97>
not I exactly, I asked him to do both
<heller1>
keep in mind, that when running on multi socket machines, and machines with hyper threads, you need to watch out what exactly you report
<nikunj97>
aah ok!
<heller1>
running with 2 threads does not always mean the same ;)
<nikunj97>
ohh crap with avx2 enabled, HPX just crashed on me lol
<nikunj97>
somewhere there was a seg fault
<heller1>
your application crashed, not HPX
<nikunj97>
yea, I meant my application
<heller1>
nikunj97: soo, one of the reasons I chose this iterator based design and didn't go for a grid with indexing and such is to avoid "off by one errors" and the like
<heller1>
index based calculations are bad
<nikunj97>
I can understand your decision well now.
<nikunj97>
to me it looked cleaner so I thought let's go with the grid approach
<heller1>
it looks cleaner because you are more familiar with it
<nikunj97>
that is also true
<nikunj97>
I'm not used to writing my own iterators, so it got difficult to handle your code
<heller1>
you took a completely generic solution, and specialized it again
<heller1>
my solution should have also worked with your grid data type
<nikunj97>
yes it will also work. Again I'm not used to working with iterators I don't completely understand. So I decided to swap them out
<nikunj97>
I will learn how to write one properly
<heller1>
and the iterators were generic enough to adapt to any other random access, contiguous iterator ;)
<heller1>
just wanted to mention that
<nikunj97>
aah, didn't know
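The generic, iterator-based style heller1 is advocating can be sketched as follows - a toy 1D kernel written purely against random-access iterators, so it works with any contiguous container without index arithmetic; this is an illustration, not the actual stencil code:

```cpp
#include <cassert>
#include <iterator>
#include <vector>

// Three-point averaging kernel expressed only through iterators: no indices,
// so the "off by one" class of bugs mentioned above is largely avoided, and
// the same code adapts to std::vector, std::array, or a custom Grid row.
template <typename RandomIt, typename OutIt>
void three_point_average(RandomIt first, RandomIt last, OutIt out)
{
    // Interior points only: out[i] = (in[i-1] + in[i] + in[i+1]) / 3
    for (RandomIt it = std::next(first); it != std::prev(last); ++it)
        *out++ = (*std::prev(it) + *it + *std::next(it)) / 3;
}
```

Because the kernel only requires random access iterators, swapping the underlying container (e.g. a grid abstraction over a different vector type) needs no changes to the algorithm itself.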
<heller1>
what I find discouraging however is that your solution is faster ;)
<nikunj97>
haha, a bigger grain size is always better in hpx's case ;)
<heller1>
which kind of defeats the purpose
<heller1>
which is btw a bad thing
<nikunj97>
coz you can extract less parallelism?
<heller1>
since we are promoting low overhead threads :P
<nikunj97>
lol true
<heller1>
yes
<nikunj97>
our threads still take 1us worth of time
<nikunj97>
for a loop over 1000 elements, that's merely 5us
<nikunj97>
so that 1us shows pretty well there
<nikunj97>
5 -> 25us, my bad
<nikunj97>
but if you take let's say multiple rows in the stencil
<nikunj97>
your grain size increases to ~200us, which is more than enough to hide HPX's overhead in noise
<nikunj97>
that's why I get near serial performance
<nikunj97>
also, single core performance with simd on stencil doesn't improve much, more like 1.5-1.7x jump
<nikunj97>
but with multicore, it jumps to about 3.5x
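As a sanity check, the grain-size arithmetic from this exchange can be written out explicitly (the figures are the rough, order-of-magnitude numbers quoted above, not measurements):

```cpp
#include <cassert>

// Back-of-envelope check of the grain-size argument, all times in
// microseconds and purely illustrative.
constexpr double task_overhead_us = 1.0;   // per-task spawn/schedule cost
constexpr double one_row_work_us  = 25.0;  // ~1000-element row update
constexpr double batched_work_us  = 200.0; // several rows batched per task

// Fraction of each task's wall time lost to scheduling overhead.
constexpr double overhead_fraction(double work_us)
{
    return task_overhead_us / (task_overhead_us + work_us);
}
```

With one row per task the overhead is roughly 1/26, i.e. around 4% of the task's time; batching rows into a ~200us grain pushes it below 1%, which is why the overhead disappears into noise.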
hkaiser has joined #ste||ar
Abhishek09 has joined #ste||ar
<hkaiser>
simbergm, rori: sorry for commenting on a PR that was already merged - I should have looked more carefully earlier
<simbergm>
hkaiser no problem, thanks for commenting despite me merging it! very good catches
<rori>
thanks!
Abhishek09 has quit [Remote host closed the connection]
Hashmi has quit [Quit: Connection closed for inactivity]
<rori>
hkaiser: could you explain the const problem more? I'm not sure I see it cause for me params.startup is not const
<rori>
thanks
<rori>
ah ok actually I understood the problem ^^
<rori>
Thanks
gonidelis has joined #ste||ar
<hkaiser>
rori: params itself is const, so are its members
<hkaiser>
rori: btw, hpx_init_params.cpp is not part of any build target (for me)
<rori>
my bad, I thought it was ok cause I could see it in hpx_init_SOURCES but apparently it is not, I'll open a PR to fix all of this
<hkaiser>
give me a sec
<hkaiser>
rori: I take that back, it is part of the hpx_init target
<rori>
ah ok great
<hkaiser>
rori: however, the header hpx_init_params.hpp is part of the hpx target, is that intentional?
<rori>
oh nope sorry !
<hkaiser>
not a big deal, I guess, merely a question of consistency
<rori>
yep I'll change that!
<simbergm>
btw, the hpx_init/wrap pr moves both of them to the hpx target
<simbergm>
you might be able to just leave it alone
<rori>
ah ok
<rori>
thanks
<simbergm>
both, as in source and header
<hkaiser>
simbergm: that's the right thing to do as the HPX_EXPORT in hpx_init_params.hpp will cause warnings otherwise (inconsistent use of dllimport/dllexport)
<hkaiser>
rori: if you find a way to handle the HPX_APPLICATION_NAME macro, that is
<hkaiser>
otoh, we can remove that altogether, but that's for another PR, I guess
<simbergm>
yep, also all the hpx_init/start.hpp files are in the hpx target
<hkaiser>
yes
<rori>
I'll look into that :)
<simbergm>
hkaiser: can't we have a static in hpx_init_params.hpp after all? it'll be included in most cases only once and even if it's included twice things should still work if it's static
<hkaiser>
simbergm: we could do that - might be the easiest way to deal with things
<hkaiser>
alternatively we could do something similar to what we have with the prefix
<simbergm>
set_hpx_prefix?
<hkaiser>
expose a function to be called with the macro as its argument to initialize the (hidden) variable
<hkaiser>
yah
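The pattern being discussed - replacing a macro-initialized global with a setter that fills a hidden function-local static - might look roughly like this; the names set_app_name/get_app_name are hypothetical, not the HPX API:

```cpp
#include <cassert>
#include <string>

// One instance of the variable lives behind an accessor as a function-local
// static, so no namespace-scope variable (and no dllexport on data) is
// needed; the header only exposes inline functions.
inline std::string& app_name_storage()
{
    static std::string name;
    return name;
}

// The application would call this once, passing the macro's value
// (e.g. set_app_name(HPX_APPLICATION_NAME) in this sketch's terms).
inline void set_app_name(char const* n) { app_name_storage() = n; }

inline std::string const& get_app_name() { return app_name_storage(); }
```

This sidesteps the inconsistent dllimport/dllexport warnings mentioned below, at the cost of requiring an explicit initialization call.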
<hkaiser>
btw, hpx does not compile for me with those changes, I get linker errors
<hkaiser>
rori: let me know if you would like for me to have a look
<simbergm>
sorry hkaiser
<hkaiser>
nah, no worries
<simbergm>
that's likely due to the inconsistent import/export declarations no?
<hkaiser>
my fault - I should have tried it earlier
<hkaiser>
yes
<simbergm>
ok, so rori do you mind doing a pr first just moving all hpx_init_params to the hpx target?
<simbergm>
the hpx_init/wrap pr will take longer
<rori>
sure
<simbergm>
and the other changes (move/HPX_APPLICATION_NAME) can come separately
<hkaiser>
simbergm: ok
<hkaiser>
rori: I'll try it as soon as you have it, please ping me
<Yorlik>
hkaiser: You mentioned that certain hpx functions could make a task yield, even before a future is returned. I am suspecting that when I'm sending a message as fire-and-forget from a lua script which calls hpx - especially this line of code: "hpx::async<gameobject::send_message_action<M>>( id, msg );" - that it might yield and lead to that LuaState explosion I am currently trying to understand. I wonder if there is a
<Yorlik>
way to change the cited call into something which never yields and which immediately returns. Ideas?
<Yorlik>
Some parameter like "Skip Future Creation" or so might do that.
<hkaiser>
first, hpx::async is not fire&forget
<Yorlik>
Yes - I just discard the future<void>
<hkaiser>
second, any (possibly) remote operation on components or involving actions requires to access AGAS which may suspend as it has to acquire locks
<hkaiser>
if you want a real fire&forget just use hpx::apply<Action>(id, args...)
<hkaiser>
but it still may suspend for reason (2)
<Yorlik>
Does apply involve any waiting?
<hkaiser>
no way we can guarantee it wouldn't suspend
<Yorlik>
I wonder if I should just drop my messages into a queue
<Yorlik>
And have an object handle the messages
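Yorlik's queue idea can be sketched as follows - a minimal thread-safe queue where the enqueuing side does a cheap, bounded-time push and never touches HPX machinery; `MessageQueue` and the `std::string` payload are illustrative stand-ins, not the actual game code:

```cpp
#include <cassert>
#include <mutex>
#include <queue>
#include <string>
#include <utility>

// Producer side: push() only takes a short-lived lock, so the calling task
// cannot be suspended by AGAS lookups or future creation. A separate handler
// object drains the queue and performs the actual (possibly remote) sends.
struct MessageQueue
{
    void push(std::string msg)
    {
        std::lock_guard<std::mutex> lk(mtx_);
        q_.push(std::move(msg));
    }

    // Non-blocking pop: returns false when the queue is empty.
    bool pop(std::string& out)
    {
        std::lock_guard<std::mutex> lk(mtx_);
        if (q_.empty())
            return false;
        out = std::move(q_.front());
        q_.pop();
        return true;
    }

private:
    std::mutex mtx_;
    std::queue<std::string> q_;
};
```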
<gonidelis>
I have already contacted hkaiser. I have submitted a proposal for GSoC 2020 on range based new algorithms implementation! I am so looking forward to the community's feedback either directly to the doc or with a comment right here (just make sure to tag me). I would be glad to pm my proposal to any member of the community in case they are
<gonidelis>
interested. Any comments are appreciated. Please help me understand what I should improve and where to focus. Thank you all!
<hkaiser>
are you sure this is causing your state explosions?
<hkaiser>
Yorlik: ^^
<Yorlik>
It is connected to sendmessage for sure
<hkaiser>
gonidelis: we have a mentors gsoc mailing list you might want to send your proposal there
<hkaiser>
first however I need to make sure it has been updated this year, give me a bit of time to figure that out, pls
<Yorlik>
I have one idea I want to try: Put an hpx::sleep_for on tasks asking for a Lua state, when the system is hot, so other tasks get time to give back their Lua States
<Yorlik>
And not do it, when it's cooled down
<hkaiser>
Yorlik: you can look at how many states you have active and delay any further tasks until the number drops
<Yorlik>
That's the plan
<Yorlik>
Like impose a cheap check every 20 states being given out or so
<hkaiser>
Yorlik: jbjnr has done something similar in his guided_executor (I believe that was the name)
<Yorlik>
He mentioned a limiting executor
<Yorlik>
I can put something in my custom start function or into the pools, still meditating on it
<hkaiser>
rori, simbergm: thanks for the quick fix (#4463), that works for me
<hkaiser>
Yorlik: yah, that's the name
<simbergm>
hkaiser: good, I think you can go ahead and merge it then
<Yorlik>
I'll figure something out. At the moment it's the only logical, non-bug-requiring model of it I have - tasks being swapped out too often and too fast, and the only function call in my test doing that is sendmessage.
<hkaiser>
simbergm: done - thanks again rori!
<Yorlik>
Yield_while - looks like an interesting function
<Yorlik>
hkaiser: Does the age of a task play a role for its priority?
<hkaiser>
no
<hkaiser>
it's all purely first-in/first-out
<hkaiser>
plus stealing from the lower end (newer tasks) if needed
<Yorlik>
That probably automatically leads to a depth-first-ish kind of traversal of the task tree, right?
<Yorlik>
I wonder if it's a sort of traversal control problem.
<Yorlik>
When I run a frame I create a ton of root nodes - each gameobject starts its own little subtree which requires its own LuaState
<hkaiser>
could be
<Yorlik>
Subtrees which have been started already should be prioritized, since at the end of the frame all join.
<Yorlik>
So the use of LuaStates could be minimized
<hkaiser>
Yorlik: not sure how to do that
<Yorlik>
But FIFO actually seems to be good for that.
<Yorlik>
I need to think more about it.
<hkaiser>
here is an idea
<hkaiser>
we already have 'boost_thread_priority' which will set the priority of a thread to high on its first execution and will then lower it to normal if suspended
<hkaiser>
what you need is the opposite
<hkaiser>
starting a thread with lower priority, but as soon as it has started it will have normal priority if suspended
<Yorlik>
Yes - priority needs to go up with age
<hkaiser>
that should do what you want
<hkaiser>
Yorlik: we have only 3 priorities, it's a very crude system
<Yorlik>
That thread priority is about a task, right?
<hkaiser>
yes
<Yorlik>
Is there a customization point for that ?
<hkaiser>
no
<hkaiser>
you'd have to write your own HPX scheduler which I wouldn't recommend
<Yorlik>
I could use a thread local counter which increases with each object
<hkaiser>
and then?
<Yorlik>
And then dampen priority depending on that counter
<hkaiser>
there is no way to control priorities of tasks
<Yorlik>
Like a chance to reschedule if the system is hot AND the counter is high
<hkaiser>
nothing beyond low/normal/high
<Yorlik>
It would cost some efficiency to reschedule ofc.
<hkaiser>
Yorlik: I think the count_up/count_down is the best option right now
<Yorlik>
I need to be careful not to overkill with stupid waits
<Yorlik>
I'll come up with something.
<hkaiser>
as long as your lower threadhold is larger than zero you'd be fine
<hkaiser>
*threshold*
<Yorlik>
The general problem is resource use by tasks and how to limit that
<Yorlik>
So - I'll make the resource - my LuaState Pool - smarter
<hkaiser>
things are fine if you don't even create the task when you already have too many
<Yorlik>
It will yield greedy tasks with low priority
<Yorlik>
I think thread_local counters will be the way to go. That will be fun. :)
<hkaiser>
Yorlik: sure you'll need counters to count
<hkaiser>
but pls don't attempt to fiddle with task priorities
<Yorlik>
I will work on my State pool
<Yorlik>
I'll make it dodge tasks based on parameters
<Yorlik>
dodge = wait+yield
<hkaiser>
I still think you're barking up the wrong tree
<Yorlik>
What do you think then?
<hkaiser>
no idea, it's just a gut feeling
<Yorlik>
It's possible there might be a condition preventing the giving back of LuaStates from working. That would be a bug. But I have tested this quite a bit and it appears to be rock solid.
<Yorlik>
Still - the system is so young - it might be all bugs.
gonidelis has quit [Ping timeout: 240 seconds]
gonidelis has joined #ste||ar
<hkaiser>
gonidelis: your proposal looks very good!
diehlpk_work has joined #ste||ar
<diehlpk_work>
hkaiser, I made our course CiC conform and added all their requirements
<gonidelis>
hkaiser thank you for your time. I am glad you liked it. I will wait for further suggestions by the weekend on my email ;) Thank you!
<gonidelis>
what about the mentors mailing list? Where could I find it? I only know the community mailing list address thus far...
<hkaiser>
gonidelis: I'll find out for you, need more time to make sure everybody is subscribed etc
<gonidelis>
sure. many thanks!
<Yorlik>
BTW hkaiser: Changing the call from async to apply already reduced the number of LuaStates by a factor of 5 or so
karame78 has left #ste||ar [#ste||ar]
karame7851 has joined #ste||ar
Karame7839 has joined #ste||ar
Karame7839 has quit [Remote host closed the connection]
bita has joined #ste||ar
Hashmi has joined #ste||ar
mdiers_ has quit [Ping timeout: 264 seconds]
<hkaiser>
Yorlik: interesting, note however that apply has no means of reporting (remote) errors
<Yorlik>
I am trying a very simple thing now: Adding spare lua states when the freelist of the pool runs dry. That is a throttle and a speedup for the next requests
<hkaiser>
shouldn't it be sufficient to limit the number of lua states per core?
<Yorlik>
It might work, sure. But how would I find the required limit?
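The pool behaviour Yorlik describes (create a batch of spares whenever the freelist runs dry) can be sketched like this; `State` is a placeholder for a Lua state and all names are illustrative:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

struct State { int id; };  // stand-in for an actual Lua state

// Freelist-backed pool: when acquire() finds the freelist empty it grows by a
// batch of spares. Growing is the slow path, so exhaustion both throttles the
// current requester and pre-pays for the next ones - the negative feedback
// described in the conversation.
class StatePool
{
public:
    StatePool(std::size_t initial, std::size_t spares) : spares_(spares)
    {
        grow(initial);
    }

    std::unique_ptr<State> acquire()
    {
        if (free_.empty())
            grow(spares_);  // freelist dry: create a batch of spares
        auto s = std::move(free_.back());
        free_.pop_back();
        return s;
    }

    void give_back(std::unique_ptr<State> s) { free_.push_back(std::move(s)); }

    std::size_t free_count() const { return free_.size(); }
    std::size_t created() const { return created_; }

private:
    void grow(std::size_t n)
    {
        for (std::size_t i = 0; i < n; ++i)
            free_.push_back(
                std::make_unique<State>(State{static_cast<int>(created_++)}));
    }

    std::vector<std::unique_ptr<State>> free_;
    std::size_t spares_;
    std::size_t created_ = 0;
};
```

A real version would need locking (or per-worker pools) and could answer hkaiser's question empirically by tracking the high-water mark of created().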
<diehlpk_work>
gonidelis, We do not have the mentor mailing list updated, but I will do it soon
mdiers_ has joined #ste||ar
bita has quit [Read error: Connection reset by peer]
nan2 has joined #ste||ar
gonidelis has quit [Remote host closed the connection]
Abhishek09 has joined #ste||ar
bita has joined #ste||ar
rtohid has joined #ste||ar
diehlpk_work_ has joined #ste||ar
diehlpk_work has quit [Ping timeout: 240 seconds]
<Yorlik>
hkaiser: I now still have bursts of lua state creation, but the overall count stays below 1000 again (it was up to 5000 and more).
<Yorlik>
Which means I am giving tasks Lua States faster, so they don't linger around that much and in case the system is hot, Lua State creation takes longer, since I am creating spares when the freelist runs dry.
<Yorlik>
So - it's two measures: using only one LuaState per task (batch of object updates) and changing the resource provisioning (Lua State Pool)
<Yorlik>
I'm pretty sure there still is room for improvement, but this is at worst a not-too-bad solution which does not introduce any unnecessary wairs.
<Yorlik>
waits
<Yorlik>
I think it's a problem of resonance/vibrations in a dynamic system after all.
<Yorlik>
So essentially I introduced some negative feedback.
<Yorlik>
While the task is waiting to get its lua state and more spares get created during that wait as needed, it cannot be swapped out. Load then swaps over to the other worker threads and there the same thing happens.
<Abhishek09>
.
<Yorlik>
So the system is cooling down, while preparing to handle more load in the same moment.
nikunj97 has quit [Ping timeout: 240 seconds]
nan9 has joined #ste||ar
nan2 has quit [Remote host closed the connection]
maxwellr96 has quit [Read error: Connection reset by peer]
Hashmi has quit [Quit: Connection closed for inactivity]
nikunj97 has joined #ste||ar
ibalampanis has joined #ste||ar
<ibalampanis>
Hello to everyone, I'm Ilias Balampanis and I'm interested in GSoC project!
<zao>
Hi there, ibalampanis!
nikunj97 has quit [Remote host closed the connection]
<ibalampanis>
I'm interested in two projects. The first one is a legacy and the title is Script Language Bindings and the second one is named as Test Framework for Phylanx Algorithms
<K-ballo>
hi ibalampanis, welcome
<ibalampanis>
I'm making my proposal for these two projects right now
<ibalampanis>
Is it legal?
<ibalampanis>
zao K-ballo thanks guys!
ibalampanis has quit [Remote host closed the connection]
ibalampanis has joined #ste||ar
<ibalampanis>
I disconnected without a specific reason.. Did I miss anything?
<ibalampanis>
bita could i send to you my draft proposal tomorrow?
<zao>
Last thing was your "thanks guys".
<ibalampanis>
zao thanks
<jbjnr>
Yorlik: hkaiser meant the limiting_executor. It allows you to set a max number of tasks that can be 'in flight' - so you create it with, say, a 500 upper and 400 lower threshold; when the task count goes over 500 it blocks, and when it drops back to 400 it allows another 100 to be spawned.
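The upper/lower-threshold behaviour jbjnr describes can be sketched roughly as follows - a minimal standalone illustration using standard C++ primitives, not the actual hpx limiting_executor; the class and member names are made up:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>

// Hysteresis throttle: acquire() blocks once `upper` tasks are in flight and
// spawning resumes only after the count has drained down to `lower`.
class TaskLimiter
{
public:
    TaskLimiter(int upper, int lower) : upper_(upper), lower_(lower) {}

    void acquire()  // call before spawning a task
    {
        std::unique_lock<std::mutex> lk(mtx_);
        if (in_flight_ >= upper_)
            blocked_ = true;  // hit the upper threshold: stop new spawns
        cv_.wait(lk, [this] { return !blocked_; });
        ++in_flight_;
    }

    void release()  // call when a task finishes
    {
        std::lock_guard<std::mutex> lk(mtx_);
        --in_flight_;
        if (in_flight_ <= lower_)
        {
            blocked_ = false;  // drained to the lower threshold: reopen
            cv_.notify_all();
        }
    }

    int in_flight() const
    {
        std::lock_guard<std::mutex> lk(mtx_);
        return in_flight_;
    }

private:
    mutable std::mutex mtx_;
    std::condition_variable cv_;
    int upper_, lower_;
    int in_flight_ = 0;
    bool blocked_ = false;
};
```

The gap between the two thresholds is what lets a fresh batch of tasks start at once instead of trickling in one by one.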
<jbjnr>
not sure if the latest version is in master. I have some local changes here
<jbjnr>
sorry. didn't see that hkaiser already referenced it
<Yorlik>
jbjnr: I'll definitely look into that.
<Yorlik>
At the moment I'm playing around a lot with stuff and trying out things to get a better feeling for that system I created.
<jbjnr>
The version in master might be dodgy. I gave up doing PRs because it's too much work to fix all the changes I get asked for.
<Yorlik>
Aww ...
<jbjnr>
But I can send uoi my version
<jbjnr>
* But I can send you my version
<Yorlik>
How big is the difference to the one in master?
<jbjnr>
no idea. haven't looked at it for months
<Yorlik>
IC.
<Yorlik>
But sure - I'd be interested to have a look at it and experiment with it. I still am not sure what's the best solution for this problem that came up.
Abhishek09 has quit [Remote host closed the connection]
ibalampanis has quit [Remote host closed the connection]
ibalampanis has joined #ste||ar
nan9 has quit [Remote host closed the connection]
<simbergm>
hkaiser: thanks! upvoted
<bita>
Hi ibalampanis, it is good to see you here. I think a student can write proposals for more than 1 project (please correct me if I am wrong diehlpk_mobile[m )
<hkaiser>
bita: that should be fine
<bita>
Of course, I will be happy to see your proposal. How does compiling HPX and Phylanx go?
<bita>
thanks hkaiser :) so it is fine and legal
karame7851 has quit [Remote host closed the connection]
<hkaiser>
well, the student will not be accepted for more than one proposal, however
<hkaiser>
also, I'd rather spend more time on a single proposal than to create two half-baked ones
<bita>
got it. Please read this ^^ ibalampanis
Abhishek09 has quit [Remote host closed the connection]
<diehlpk_mobile[m>
Yes, I agree better one good proposal instead of two semi good ones
<diehlpk_mobile[m>
ms[m] yet?
<diehlpk_mobile[m>
jbjnr ?
<diehlpk_mobile[m>
Anyone interested in helping out with the Google Season of Doc proposal?
<simbergm>
diehlpk_work: maybe, when is the deadline
<simbergm>
?
Amy1 has quit [Ping timeout: 240 seconds]
<hkaiser>
simbergm: late May, I think
Abhishek09 has joined #ste||ar
<hkaiser>
bita: I will be a couple minutes late for our meeting
Amy1 has joined #ste||ar
<bita>
hkaiser, Okay
<jbjnr>
diehlpk_mobile: google season of docs. No thanks.
<jbjnr>
very disappointed already with the lack of interest in season of code.
<diehlpk_mobile[m>
Jbjnr we have at least few students interested in GSoC
<diehlpk_work_>
I think we will have three or four this year.
ibalampanis has quit [Remote host closed the connection]
<Yorlik>
I showed it to my fellow dev, but he didn't feel qualified. They hardly do C++ at University.
<Yorlik>
Helsinki that is.
ibalampanis has joined #ste||ar
<Yorlik>
They teach how computers work with assembler and then go straight to Java.
<ibalampanis>
bita hkaiser thank you for your support
<ibalampanis>
Considering your opinion, I will concentrate on making one proposal
<ibalampanis>
bita, I will send to you tomorrow via email
<ibalampanis>
Thank you guys!
<diehlpk_work_>
hkaiser, Does the YouTube channel of stellar group still work?
<diehlpk_work_>
I have the backup of all our videos on my local disk and running out of space
<diehlpk_work_>
So do we still need all these videos?
<ibalampanis>
hkaiser Could you send me a tutorial, if one exists, for compiling your software?
<diehlpk_work_>
simbergm, April 13, 2020 at 20:00 UTC
<diehlpk_work_>
Mentoring organizations can begin submitting applications to Google
<hkaiser>
diehlpk_work_: I did ask Chris (Cano) two days ago, they are trying to figure things out with google
<hkaiser>
no idea how long this would take
<diehlpk_work_>
Ok, so I will keep them
<hkaiser>
I'll raise the issue again early next week
<hkaiser>
do you run short in disk space?
<diehlpk_work_>
Yes
<hkaiser>
let's buy you an external drive, then
<diehlpk_work_>
Can we order an external hard drive for me?
nan7 has joined #ste||ar
<hkaiser>
absolutely, select something you like on Amazon and send it to Katie, she will organize things to be delivered to you
<hkaiser>
they have nice and small USB 2TB disks nowadays
<bita>
ibalampanis, great. Looking forward to it
shahrzad has joined #ste||ar
<ibalampanis>
hkaiser I would like to make one more question
<hkaiser>
sure
<hkaiser>
ask away
nikunj97 has joined #ste||ar
<ibalampanis>
Searching for other interesting tasks from other organizations, I found one that I like it. Provided that I have 3 slots, could I make two for Ste||ar organization?
<simbergm>
diehlpk_work_: ok, thanks
<simbergm>
I'll try to have a look, feel free to ping me if I forget
<hkaiser>
ibalampanis: as I said above, you can sure submit 2 proposals, but you will not be funded for more than one
<ibalampanis>
Yeah, sure!
<hkaiser>
ibalampanis: also, my suggestion would be to rather flesh out one proposal and invest all of your time there, rather than disperse your time and end up submitting two half-baked proposals
<ibalampanis>
I thought again and you are right..
<ibalampanis>
Could you believe that I'm late for talking to you?
ibalampanis has quit [Remote host closed the connection]
<zao>
hkaiser: What has happened historically when multiple applications go for the same task? Separate attempts on the same project, pick one, consider a related project?
<heller1>
no
<heller1>
one had to be picked
<zao>
tricky
ibalampanis has joined #ste||ar
<heller1>
we never ran into that situation
<jbjnr>
yes we did.
<ibalampanis>
lost connection hkaiser.. last message was from me about my possible latency
<heller1>
jbjnr: oh, when?
<heller1>
looks like I forgot ...
<jbjnr>
we often get loads of very poor last minute proposals and reject any that are on a project that a decent proposal is in for
<jbjnr>
if that's what you mean
<heller1>
I mean, we had proposals for the same project, in my memory however, it always was relatively clear which one we liked most
<heller1>
yeah
<heller1>
i meant we never were in the situation where we had to choose between two strong proposals on the same project
<heller1>
since most of the time, we can steer the project proposals in the right directions ;)
<ibalampanis>
hkaiser I'm very confused about which project i like most. I decided to go for one of your mentored projects, "Script Language Bindings"
<hkaiser>
uhh, do you have some experience with combining c++ with other languages (like Python)?
<ibalampanis>
yeah, I have sent an email to you now
nan7 has quit [Ping timeout: 240 seconds]
<ibalampanis>
I have little experience with that but I think the most important thing is that I have a lot of willingness to become familiar and understand these techniques
<nikunj97>
hkaiser, just went through the survey results. Documentation is severely lacking, they say.
<hkaiser>
right
<nikunj97>
I was thinking to start a blog series for HPX
<nikunj97>
starting with hello world all the way to distributed
<nikunj97>
making use of the functionality hpx provides
<jbjnr>
nikunj97: do it
<nikunj97>
will help me use the api I never did and get better in C++ in general
<jbjnr>
(please)
<nikunj97>
jbjnr, alright! I'll start devoting time to it. I'll need someone to review my blogs before I publish it though.
<jbjnr>
ok
<heller1>
woot, the world has just gotten its first exaflop machine
<jbjnr>
heller: that's pretty impressive. I had no idea so many people were using it
ibalampanis has quit [Remote host closed the connection]
<jbjnr>
I have an idea for a new project. My choir can't meet due to covid. I should make a distributed choir practice program like zoom or something, using hpx for the messaging.
<nikunj97>
nan8, what I meant to say was that it's not linking against boost_program_options
<nan8>
nikunj97 So how can I link it to boost_program_options?
karame_ has joined #ste||ar
<ct-clmsn>
where is the clang-tidy rule suite for hpx in the hpx source tree?
<jbjnr>
It would give me a good excuse to get libfabric running on windows and mac since that's what most of the choir people have
<heller1>
I actually thought about the same, but for digital classrooms, our teachers are in a total mess without tools right now
<jbjnr>
yup
rtohid has joined #ste||ar
<zao>
We used to run Mumble for low-latency voice comms, but moved to Discord for that.
<jbjnr>
zao: thanks. I'll look into mumble. might be usable for us
<zao>
We used to run umurmurd towards the end as light-weight server.
<zao>
Note that Mumble is just audio, Discord also has video.
<Yorlik>
^^ + text + code formatting
<jbjnr>
is discord the chat thingy that is everywhere now, or something else with the same name
<jbjnr>
(like IRC/matrix etc)
<Yorlik>
Discord is extremely popular now, yes. It has its origins in gaming, but many programmer communities use it too.
<rori>
yes llvm has a channel there
<Yorlik>
You can even do screensharing with it, though that is not as good as e.g. teamviewer.
karame_ has quit [Remote host closed the connection]
<Yorlik>
It's the facebook of real time and near real time communications.
<jbjnr>
why are we using this very slow matrix thingy then?
<Yorlik>
Fear of new "hip" things?
<jbjnr>
After I type, it takes 20 seconds to appear
<Yorlik>
Discord sometimes has hiccups, but it has become pretty stable.
<Yorlik>
Absolutely usable for a professional context.
Abhishek09 has quit [Remote host closed the connection]
<Yorlik>
If any of you want to try out Discord really quick I made a test server (was a 20 second task), you can check it out here: https://discord.gg/FvfcBS I will delete it shortly - it's just if people here who don't know discord want a quick safe test.
<ct-clmsn>
rtohid, so the runtime error i sent you was from the mix of clang w/libstdc++ (gnu); the error mentioned yesterday was python compiled with gcc importing the clang w/libstdc++ compiled module
<jbjnr>
Thanks.
<Yorlik>
OK - I deleted that Discord server jbjnr will do his own test for the group after seeing it. The link above is no longer valid. Cheers!
<hkaiser>
nan8: could this be a mismatch of c++ standards settings for hpx and phylanx?
<hkaiser>
ct-clmsn: so did you resolve your issues now?
<nan8>
hkaiser maybe, let me check the script for hex and phylanx
<ct-clmsn>
heller1, got clang-tidy working properly, thanks for the help!
<ct-clmsn>
rtohid, oh, i found an issue in the CMakeLists.txt file for tiramisu - they hard coded `g++` into the build file as the default compiler; their CMakeLists.txt script has poorly arranged logic
<ct-clmsn>
rtohid, you may want to revisit that file in the root directory for tiramisu and make a quick patch. i'll be pushing a patch file to my phyflow branch tonight
<ct-clmsn>
rtohid, it's strange b/c they push clang/llvm in Halide but isl and tiramisu default to gcc/g++ in their build scripts
<ct-clmsn>
rtohid, annoyances but opportunities to make contributions
shahrzad has quit [Ping timeout: 240 seconds]
Abhishek09 has joined #ste||ar
nan8 has quit [Remote host closed the connection]
<nikunj97>
heller1, the seg fault was due to an aligned memory load into nsimd::pack when I was doing an unaligned memory access :P
weilewei has joined #ste||ar
<rori>
* yes llvm has a server there
rtohid has quit [Remote host closed the connection]
nan6 has joined #ste||ar
weilewei has quit [Remote host closed the connection]
K-ballo has quit [Ping timeout: 240 seconds]
K-ballo has joined #ste||ar
nikunj has quit [Ping timeout: 260 seconds]
nikunj has joined #ste||ar
Abhishek09 has quit [Remote host closed the connection]
<hkaiser>
diehlpk_work_: could you give me a link to where the octotiger build scripts and build instructions live, please?
<hkaiser>
diehlpk_mobile[m: ^^
nan6 has quit [Remote host closed the connection]
karame_ has joined #ste||ar
<diehlpk_mobile[m>
Hkaiser on GitHub diehlpk/powertigee
<diehlpk_mobile[m>
Powertiger
<diehlpk_mobile[m>
Sorry, I am on my phone and can not send you the link
<hkaiser>
ok, that's fine, thanks!
<diehlpk_mobile[m>
On what platform you like to build
<diehlpk_mobile[m>
There is no windows support
nk__ has joined #ste||ar
nikunj97 has quit [Ping timeout: 240 seconds]
khuck has joined #ste||ar
<khuck>
hkaiser: quick question, if you are free
<hkaiser>
khuck: sure
<hkaiser>
diehlpk_mobile[m: no worries, just some linux x64 system
<khuck>
we are creating in fastlane, not research.gov right?
<hkaiser>
khuck: uhh
<khuck>
I'm pretty sure so
<hkaiser>
khuck: here is the mail I received from our admin person: I have not received Fastlane Temporary Proposal numbers and Pins from Oregon University
<hkaiser>
that's all I have ATM
<khuck>
right, I am trying to create it
<khuck>
but I am ignorant
<khuck>
no worries, I'll figure this out
<hkaiser>
ok, thanks - feel free to get in contact with Felisha, she is very helpful
<Yorlik>
Woops - I gave you the big file link - however - the #includes are now there, I think.
<Saswat85>
Hello. Wanted to ask about GSoC this year. Is there any first issue i could fix? I am experienced in Python, C/C++, Deep Learning and familiar with CUDA programming.
<hkaiser>
Saswat85: look at the tickets and/or the project ideas list