aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
diehlpk_mobile2 has joined #ste||ar
diehlpk_mobile has quit [Ping timeout: 276 seconds]
diehlpk_mobile2 has quit [Ping timeout: 246 seconds]
diehlpk_mobile has joined #ste||ar
diehlpk_mobile2 has joined #ste||ar
diehlpk_mobile has quit [Ping timeout: 264 seconds]
diehlpk_mobile has joined #ste||ar
diehlpk_mobile2 has quit [Read error: Connection reset by peer]
diehlpk_mobile has quit [Client Quit]
galab2 has joined #ste||ar
Smasher has quit [Remote host closed the connection]
galab2 has quit [Client Quit]
<zao> Not sure why there's less output in the Timeout scenario.
anushi has quit [Ping timeout: 276 seconds]
daissgr has joined #ste||ar
hkaiser has quit [Quit: bye]
mcopik_ has quit [Ping timeout: 264 seconds]
K-ballo has quit [Quit: K-ballo]
daissgr has quit [Ping timeout: 246 seconds]
anushi has joined #ste||ar
anushi has quit [Client Quit]
nanashi55 has quit [Ping timeout: 240 seconds]
jaafar_ has joined #ste||ar
nanashi55 has joined #ste||ar
Anushi1998 has quit [Remote host closed the connection]
anushi has joined #ste||ar
jaafar_ has quit [Quit: Konversation terminated!]
jaafar has joined #ste||ar
jaafar has quit [Ping timeout: 260 seconds]
Anushi1998 has joined #ste||ar
sharonhsl has joined #ste||ar
Anushi1998 has quit [Remote host closed the connection]
david_pfander has joined #ste||ar
<github> [hpx] sithhell force-pushed readd_abort from 98653ad to 71bced1: https://git.io/vxL8D
<github> hpx/readd_abort 71bced1 Thomas Heller: Readding accidently removed std::abort...
david_pfander1 has joined #ste||ar
Anushi1998 has joined #ste||ar
<github> [hpx] StellarBot pushed 1 new commit to gh-pages: https://git.io/vxLBz
<github> hpx/gh-pages 7267f35 StellarBot: Updating docs
david_pfander1 has quit [Ping timeout: 240 seconds]
simbergm has quit [Ping timeout: 248 seconds]
nikunj_ has joined #ste||ar
sharonhsl has left #ste||ar [#ste||ar]
Anushi1998 has quit [Remote host closed the connection]
Anushi1998 has joined #ste||ar
jakub_golinowski has joined #ste||ar
K-ballo has joined #ste||ar
marco has joined #ste||ar
marco is now known as Guest17828
david_pfander1 has joined #ste||ar
david_pfander1 has quit [Remote host closed the connection]
titzi has joined #ste||ar
Anushi1998 has quit [Remote host closed the connection]
nanashi55 has quit [*.net *.split]
nanashi55 has joined #ste||ar
titzi_ has quit [Ping timeout: 260 seconds]
titzi has quit [Changing host]
titzi has joined #ste||ar
daissgr has joined #ste||ar
jaafar has joined #ste||ar
simbergm has joined #ste||ar
david_pfander1 has joined #ste||ar
jaafar has quit [Ping timeout: 264 seconds]
david_pfander1 has quit [Ping timeout: 252 seconds]
jakub_golinowski has quit [Quit: Ex-Chat]
hkaiser has joined #ste||ar
verganz has quit [Ping timeout: 260 seconds]
mcopik_ has joined #ste||ar
sharonlam has joined #ste||ar
sharonlam has left #ste||ar [#ste||ar]
sharonhsl has joined #ste||ar
jbjnr has joined #ste||ar
eschnett has joined #ste||ar
jbjnr has quit [Client Quit]
jbjnr has joined #ste||ar
aalekhn__ has joined #ste||ar
prashantjha has joined #ste||ar
prashantjha has quit [Client Quit]
CaptainRubik has joined #ste||ar
Anushi1998 has joined #ste||ar
hkaiser has quit [Quit: bye]
diehlpk_work has joined #ste||ar
<diehlpk_work> sharonhsl, Hi
<diehlpk_work> just read your e-mail and we have to discuss some things
<CaptainRubik> diehlpk_work: Hi, I read that you said GSoC is not for people with internships. I would really like to contribute to hpx; if I have enough time alongside my internship, would my proposal still be turned down because of it? :(
<diehlpk_work> CaptainRubik, You should work 30+ hours per week
<diehlpk_work> An internship is around 40 hours per week
<diehlpk_work> So 70 hours per week is quite intense
<diehlpk_work> In our experience, students mostly overestimate what they can handle and fail the first evaluation
<CaptainRubik> Well, I have Saturdays and Sundays off, and given that the work would be similar, it shouldn't be a problem. I will include this in the proposal.
<diehlpk_work> How would you like working 15 hours a day?
<diehlpk_work> Let me shorten my answer: we like all our students to work full-time on GSoC
<CaptainRubik> I don't have a problem with that, but the timeline is designed so that I don't have to work 15 hours each day. Thanks for the reply though. :)
<diehlpk_work> Having exams and other duties is ok, but you should be able to work full-time on the project for most of the three months
aserio has joined #ste||ar
<CaptainRubik> Thanks.
<diehlpk_work> You are of course welcome to contribute without GSoC
<sharonhsl> hi diehlpk_work, yea sure
<diehlpk_work> CaptainRubik, I did not say that; I referred to the guidelines from Google
<diehlpk_work> From my experience and from reading the GSoC mentor mailing list, most orgs do not recommend doing an internship and GSoC at the same time.
<CaptainRubik> Sure, I will try to make my case in the proposal hoping for acceptance. :)
<diehlpk_work> sharonhsl, I do not understand why you talk about a mesh in your e-mail.
CaptainRubik has quit [Quit: Page closed]
<sharonhsl> ohh, I'm not sure how to represent the material, I thought a mesh was a related idea
<diehlpk_work> No, this is a meshless method
<diehlpk_work> And there are huge differences from mesh-based methods like finite elements
<diehlpk_work> Using a dual mesh is one possibility to place the nodes
<diehlpk_work> But you do not have to store edges like in a mesh
<diehlpk_work> You have these discrete nodes instead of a mesh
<sharonhsl> so how do you save a broken bond?
<sharonhsl> I was thinking the edges in mesh are representing bonds
<diehlpk_work> Ok, a node does not only have bonds to its nearest neighbors
<diehlpk_work> it has bonds to many more
<sharonhsl> within the interactive zone?
<diehlpk_work> If you considered only the nearest neighbors, you could just use finite elements
<diehlpk_work> Yes
<diehlpk_work> Forget about the edges in the dual mesh, this is just to generate the geometry
<sharonhsl> here's the thing I don't understand, what is the formal definition of a crack?
<diehlpk_work> There is none
<diehlpk_work> People have to define it
<diehlpk_work> We only have damage and no cracks
<sharonhsl> ohhhhhh
<diehlpk_work> Damage is the number of actual bonds / initial bonds
<diehlpk_work> So you just have this damage information
prashantjha has joined #ste||ar
<diehlpk_work> Extracting cracks is a different field
<sharonhsl> and damage occurs when a bond breaks?
<diehlpk_work> There is one paper with Michael where we tried to extract the crack surface
<diehlpk_work> No, we just say that the bond between two nodes is broken and no forces are exchanged
<diehlpk_work> Damage is always a value per node
<prashantjha> Breaking of a bond is a sign of damage. The damage at a node is computed as the ratio of the number of broken bonds to the total number of bonds of that node.
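A minimal sketch of the damage measure described above, assuming a hypothetical per-node list of bond flags (not the project's actual data structures). The damage of a node is the fraction of its initial bonds that have broken, equivalently 1 minus the fraction that is still intact:

    #include <cstddef>
    #include <vector>

    // Hypothetical bookkeeping: one flag per bond of a node, true once broken.
    struct NodeBonds {
        std::vector<bool> broken;
    };

    // damage(x) = (# broken bonds) / (# initial bonds), a value per node in [0, 1].
    double damage(NodeBonds const& n)
    {
        if (n.broken.empty())
            return 0.0;
        std::size_t broken_count = 0;
        for (bool b : n.broken)
            if (b)
                ++broken_count;
        return static_cast<double>(broken_count) /
               static_cast<double>(n.broken.size());
    }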
<sharonhsl> in your paper you show diagrams of crack growth, like the Stanford bunny. Do the white spaces (I assume they represent our visual perception of cracks) appear because the nodes move due to pair-wise forces?
<diehlpk_work> sharonhsl, Which paper?
mbremer has joined #ste||ar
<sharonhsl> I only read one of your papers; it's called Modelling and Simulation of cracks and fractures with peridynamics in brittle materials (2017)
<diehlpk_work> Oh, this is my thesis
<diehlpk_work> Which Chapter or Section?
<sharonhsl> section 4.3
<diehlpk_work> Ok, these are advanced visualization techniques
<diehlpk_work> To try to get more details about the crack
<diehlpk_work> and its surface
<sharonhsl> ahaa, so to put it simply, this project is tracking movements of discrete nodes governed by the peridynamic equation of motion in a vector space?
<diehlpk_work> Yes
<diehlpk_work> No visualization like in this section, though
<sharonhsl> as for the coordinates, is it good to use a fixed grid?
<diehlpk_work> I was thinking that someone provides you with a CSV file with x, y, z, volume, density and you read this file
<sharonhsl> so visualization is also included in the project?
<diehlpk_work> No
<diehlpk_work> You should only implement the tracking of discrete nodes on distributed memory with node balancing
<sharonhsl> great, it's much clearer now. Funny, it sounded so complicated that I kept losing focus at first, but now the goal is very clear
<diehlpk_work> Ok
<sharonhsl> also, in the email I mentioned message-driven programming in HPX; it's a feature stated on the HPX website - http://stellar.cct.lsu.edu/files/hpx_0.9.6/html/hpx/tutorial/intro/principles.html . What exactly is this?
<prashantjha> sharonhsl, in your email you talk about the ghost grid idea
<sharonhsl> yea
<prashantjha> could you explain it here for the benefit of others
<zao> heh
<sharonhsl> sure
K-ballo has quit [Ping timeout: 264 seconds]
<sharonhsl> a lot of problems consist of a structured grid of points that needs to be updated, for example by referencing neighboring cells to obtain a value
<sharonhsl> we usually want to do it in distributed memory, so we don't need one gigantic blob of RAM
<sharonhsl> so we divide the grid among separate localities to do the computation
<sharonhsl> btw it's from the paper Ghost Cell Pattern by Fredrik Berg Kjolstad
<sharonhsl> when we need to do computation on edge cells, we need the values of their neighboring cells in other localities
<prashantjha> yes
hkaiser has joined #ste||ar
<sharonhsl> so the author proposed the idea of ghost cells, which are additional empty cells wrapping around a sub-grid
<sharonhsl> at each iteration, he uses MPI to replicate the border cells of the neighboring sub-grids
K-ballo has joined #ste||ar
<sharonhsl> that's pretty much it
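A rough, single-address-space sketch of the ghost cell pattern sharonhsl outlines, using made-up names (SubGrid, exchange_ghosts) rather than any real HPX or MPI API. In a distributed run the two copies in exchange_ghosts would become a send/receive (MPI) or a future/channel-based exchange (HPX):

    #include <cstddef>
    #include <utility>
    #include <vector>

    // 1D decomposition: each locality owns some cells plus one ghost cell on
    // each side that mirrors the neighbouring sub-grid's boundary cell.
    struct SubGrid {
        std::vector<double> cells;  // [left ghost, owned cells..., right ghost]
    };

    // One exchange step between two adjacent sub-grids.
    void exchange_ghosts(SubGrid& left, SubGrid& right)
    {
        // right neighbour's first owned cell -> left sub-grid's right ghost
        left.cells.back() = right.cells[1];
        // left neighbour's last owned cell -> right sub-grid's left ghost
        right.cells.front() = left.cells[left.cells.size() - 2];
    }

    // After the exchange, the boundary cells can be updated from local data
    // only (the ghosts), e.g. with a simple 3-point stencil.
    void update(SubGrid& g)
    {
        std::vector<double> next(g.cells);
        for (std::size_t i = 1; i + 1 < g.cells.size(); ++i)
            next[i] = (g.cells[i - 1] + g.cells[i] + g.cells[i + 1]) / 3.0;
        g.cells = std::move(next);
    }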
<prashantjha> ok. This idea sounds good for our problem.
<prashantjha> we may have a mesh where the nodes are not located in a lattice fashion
<prashantjha> will it make implementation difficult?
<sharonhsl> so I think this idea is applicable to our problem: even though the nodes may not align perfectly, they have volumes and cannot overlap, so within the interactive zone there is a limit to the number of nodes. So there's no huge worry about a locality having to handle an unexpected number of nodes
<sharonhsl> the problem is more about load balancing, i.e. how to distribute the domain when the nodes are concentrated in one locality
<prashantjha> right
<sharonhsl> for that I still haven't thought about it
<prashantjha> diehlpk_work, what do you think about ghost cell idea?
<diehlpk_work> It works for structured grids, but what happens with a weird geometry?
<sharonhsl> you mean like a hyperbolic one?
<diehlpk_work> Imagine a disk
<diehlpk_work> with 8 domains
<diehlpk_work> So we could have most points in the middle
<diehlpk_work> and fewer points in the outer domains
<diehlpk_work> So how does the ghost cell method do load balancing here?
<diehlpk_work> And how do you replace the MPI specific stuff with HPX in this method?
<sharonhsl> do fewer points in the outer domains necessarily mean less concentrated nodes?
<diehlpk_work> For a circle or a sphere, yes
<diehlpk_work> When you have the same nodal spacing
<diehlpk_work> So these nodes would have to do less work
<sharonhsl> I don't quite understand, I'm thinking that initially all nodes are distributed evenly in a disk/sphere
<sharonhsl> by fewer points, do you mean that the grid is looser towards the outer edge?
Anushi1998 has quit [Quit: Leaving]
<diehlpk_work> sharonhsl, Draw a circle on a sheet of paper
<diehlpk_work> Divide it with your method into subdomains for four nodes
aserio1 has joined #ste||ar
<diehlpk_work> After that refine it to 16 nodes
<diehlpk_work> So some nodes close to the edge have fewer nodes to track
<diehlpk_work> Which means less computational work
<diehlpk_work> How would you distribute the computational work between them?
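A small self-contained experiment illustrating diehlpk_work's circle exercise (the numbers here are illustrative, not from the project): sample a disk with uniformly spaced nodes, split its bounding box into a 4x4 block decomposition, and count nodes per block. The corner blocks end up with far fewer nodes than the central ones, which is exactly the load-balancing problem a plain ghost cell decomposition does not solve by itself.

    #include <cstdio>
    #include <vector>

    int main()
    {
        int const n = 64;        // nodes per side of the bounding grid
        int const blocks = 4;    // 4 x 4 = 16 geometric subdomains
        std::vector<int> count(blocks * blocks, 0);

        for (int i = 0; i < n; ++i)
        {
            for (int j = 0; j < n; ++j)
            {
                double x = (i + 0.5) / n - 0.5;      // map to [-0.5, 0.5]
                double y = (j + 0.5) / n - 0.5;
                if (x * x + y * y > 0.25)            // keep only nodes inside the disk
                    continue;
                int bi = i * blocks / n;
                int bj = j * blocks / n;
                ++count[bi * blocks + bj];
            }
        }

        // Print the 4x4 table of node counts per subdomain.
        for (int bi = 0; bi < blocks; ++bi)
        {
            for (int bj = 0; bj < blocks; ++bj)
                std::printf("%6d", count[bi * blocks + bj]);
            std::printf("\n");
        }
        return 0;
    }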
eschnett has quit [Quit: eschnett]
aserio has quit [Ping timeout: 245 seconds]
aserio1 is now known as aserio
<sharonhsl> I made an online whiteboard, do you mind clicking this link? https://awwapp.com/b/u3zhhelv3/
<diehlpk_work> Ok
<sharonhsl> If my drawing is what you're thinking, then I get what you mean
<diehlpk_work> No, I will draw a new one
<sharonhsl> ok
<mbremer> Hi, I'm having issues with the resource partitioner.
<mbremer> I'm running on a skylake node with 2 sockets with 24 cores per socket.
<zao> Does that include hyperthreads?
<mbremer> No it does not
<mbremer> Hyperthreads count?
<heller_> yes
<mbremer> kk,
<mbremer> Thanks! Let me rebuild then.
<sharonhsl> diehlpk_work, prashantjha: thanks a lot for the discussion. I should sleep now, will get back with new ideas tmr
<diehlpk_work> You are welcome
<prashantjha> you are welcome.
sharonhsl has left #ste||ar [#ste||ar]
eschnett has joined #ste||ar
jaafar has joined #ste||ar
prashantjha has quit [Ping timeout: 245 seconds]
<mbremer> zao, @heller_: That did the trick. Thanks
<zao> Yay.
<mbremer> I have two follow-up questions. Does it actually matter what the max cpu count gets set to? Or does it just need to be greater than the number of cores w/ hyperthreading?
<mbremer> And secondly, somewhat unrelated, is there any advantage to using numactl for distributed HPX jobs? I figure that since HPX uses hwloc, it would already have those benefits baked in
<zao> I believe it's that way because you need something fancier than a uint64_t to keep track of your masks.
<zao> I'm not familiar with the particulars for HPX, but it tends to be the case for most software.
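A small illustration of zao's point about masks (this is not HPX's actual mask type, whose width is fixed at configure time): a single 64-bit integer can describe at most 64 processing units, so a machine like mbremer's 2 x 24 cores with hyperthreading (96 PUs) needs a wider, bitset-like representation.

    #include <bitset>
    #include <cstdint>
    #include <iostream>

    int main()
    {
        // A 64-bit mask covers at most 64 processing units.
        std::uint64_t small_mask = 0;
        small_mask |= std::uint64_t(1) << 47;    // PU 47: fine
        // small_mask |= std::uint64_t(1) << 95; // PU 95: shift would be undefined behaviour

        // For 2 sockets x 24 cores x 2 hyperthreads = 96 PUs, something wider
        // is needed, e.g. a bitset sized for the configured maximum CPU count.
        std::bitset<128> wide_mask;
        wide_mask.set(95);                       // bind to the last of 96 PUs

        std::cout << "small mask bits: " << std::bitset<64>(small_mask).count()
                  << ", wide mask bits: " << wide_mask.count() << '\n';
        return 0;
    }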
EverYoung has joined #ste||ar
prashantjha has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
mcopik has joined #ste||ar
mcopik has quit [Quit: Leaving]
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
aserio has quit [Ping timeout: 265 seconds]
Anushi1998 has joined #ste||ar
Anushi1998 has quit [Read error: Connection reset by peer]
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
<hkaiser> heller_: nice
david_pfander has quit [Ping timeout: 268 seconds]
EverYoung has quit [Remote host closed the connection]
galabc has joined #ste||ar
EverYoung has joined #ste||ar
Smasher has joined #ste||ar
EverYoung has quit [Ping timeout: 252 seconds]
EverYoung has joined #ste||ar
Anushi1998 has joined #ste||ar
Anushi1998 has quit [Remote host closed the connection]
Anushi1998 has joined #ste||ar
aserio has joined #ste||ar
prashantjha has quit [Ping timeout: 264 seconds]
aalekhn__ has quit [Quit: Connection closed for inactivity]
prashantjha has joined #ste||ar
<heller_> hkaiser: indeed, but not what I wanted to post ;)
<K-ballo> ha, I was wondering about that
<K-ballo> the recurring link paste
<hkaiser> heller_: ok, so what's new in that library?
<heller_> It's closer to our implementation and we could use it for our arm context switches
<hkaiser> heller_: is it better than the boost.context implementation?
<heller_> I'm going to setup arm pycicle builders soon (got 17 arm64 boards now)
<heller_> It looks leaner
jakub_golinowski has joined #ste||ar
<hkaiser> ok
<hkaiser> not sure how something can be 'leaner' if you have to obey the rules of the ABI
<diehlpk_work> heller_, hkaiser We would be guest authors for this heise article and they do not write the article for us
galabc has quit [Ping timeout: 264 seconds]
<heller_> I'm mostly interested in just using the assembly for the switch... Not the full library
<hkaiser> what is more in the library than the assembly?
<diehlpk_work> They just approve our content, like for a paper submission
<hkaiser> diehlpk_work: sure, but why the pressure?
<diehlpk_work> Because we will never do it if we postpone it :)
<heller_> Will be hard to write about yourself without self promotion
<hkaiser> heller_: what is more in the library than the assembly?
<diehlpk_work> heller_, Self-promotion is ok, but not solely
<hkaiser> diehlpk_work: why would we want to write this if not for self-promotion?
eschnett has quit [Quit: eschnett]
eschnett has joined #ste||ar
<diehlpk_work> hkaiser, Sure, we can have self-promotion, but we should deliver some content
<hkaiser> ;)
<diehlpk_work> Use a broader field like Science and Open Source - how they could benefit from each other
<diehlpk_work> The example is HPX
<diehlpk_work> So we are fine
<diehlpk_work> They do not want a pure article about HPX, because this could be seen as advertisement
<diehlpk_work> My plan was to discuss the outline with them and after that we take more time to write
<hkaiser> k
<diehlpk_work> Once we have the scope of the article defined, we can say we need more time to produce content
<diehlpk_work> For sure, they will have some concerns and we will have to change the scope
<diehlpk_work> I was thinking that the two interviews about GSoC and this article would be cool advertisement for HPX in the German-speaking part of Europe
anushi_ has joined #ste||ar
Anushi1998 has quit [Ping timeout: 256 seconds]
jakub_golinowski has quit [Remote host closed the connection]
aserio has quit [Ping timeout: 276 seconds]
eschnett has quit [Quit: eschnett]
aserio has joined #ste||ar
prashantjha has quit [Ping timeout: 276 seconds]
<heller_> K-ballo: the atomic test is failing for me when cross compiling for a KNL, any ideas?
<nikunj_> @hkaiser: quick question, does hpx::finalize() exit every process required to shutdown the hpx runtime?
<K-ballo> heller_: the atomic feature test? it appears to be failing in circle-ci too, recently
<nikunj_> or is there anything that I'll need to shutdown myself?
<K-ballo> there was a PR to rewrite it, some months ago
<heller_> K-ballo: ok, the quick fix is to define the cmake variable HPX_HAVE_LIBATOMIC
<heller_> but that's not nice...
<K-ballo> yeah, that's what's broken
<K-ballo> last I touched it it was working fine, it was just not explicitly testing for the various sizes
<heller_> yeah..
<heller_> wonder if there is a more general approach to it
hkaiser has quit [Quit: bye]
<K-ballo> it seems it was reverted? :|
hkaiser has joined #ste||ar
hkaiser has quit [Client Quit]
eschnett has joined #ste||ar
sudhir has joined #ste||ar
<sudhir> join
sudhir is now known as Guest14274
kisaacs has joined #ste||ar
katywilliams has joined #ste||ar
<zao> Hello, world!
alok58451 has joined #ste||ar
alok58451 has quit [Client Quit]
Guest14274 has quit [Quit: Page closed]
nanashi64 has joined #ste||ar
nanashi55 has quit [Ping timeout: 276 seconds]
nanashi64 is now known as nanashi55
EverYoun_ has joined #ste||ar
EverYoung has quit [Ping timeout: 245 seconds]
EverYoun_ has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
eschnett has quit [Quit: eschnett]
Smasher has quit [Remote host closed the connection]
Smasher has joined #ste||ar
eschnett has joined #ste||ar
<K-ballo> we ought to print have-libatomic during configure
<katywilliams> Any help? Before the meeting, I pulled and built hpx [commit: e8ce5c3] on rostam, then I tried to build phylanx [commit: 76893fd] on top of it and I got a "use of undeclared identifier 'HWLOC_MEMBIND_REPLICATE'" error
<katywilliams> included from thread_executer.hpp
<zao> I wonder, if I build HPX on one x64 arch, will it be usable out of the box on another x64 arch?
<zao> (assuming Clang/GCC)
<K-ballo> katywilliams: sounds like that issue with hwloc 2.0 that's been going around lately
aserio has quit [Ping timeout: 252 seconds]
<K-ballo> how old is e8ce5c3?
<K-ballo> Feb 8, before the fix
<K-ballo> katywilliams: you are not up to date
<zao> Date: Thu Feb 8 15:49:04 2018 +0100
aserio has joined #ste||ar
<zao> Who maintains SLURM at the LSU resources?
eschnett has quit [Quit: eschnett]
eschnett has joined #ste||ar
hkaiser has joined #ste||ar
galabc has joined #ste||ar
katywilliams has quit [Ping timeout: 260 seconds]
galabc has quit [Ping timeout: 264 seconds]
galabc has joined #ste||ar
katywilliams has joined #ste||ar
<aserio> zao: that would be akheir
aserio has quit [Quit: aserio]
galabc has quit [Read error: Connection reset by peer]
diehlpk_work has quit [Quit: Leaving]
katywilliams has quit [Ping timeout: 246 seconds]
Anushi1998 has joined #ste||ar
anushi has quit [Ping timeout: 276 seconds]
EverYoun_ has joined #ste||ar
anushi_ has quit [Ping timeout: 276 seconds]
EverYoung has quit [Ping timeout: 245 seconds]
EverYoun_ has quit [Ping timeout: 252 seconds]
anushi has joined #ste||ar
nikunj_ has quit [Quit: Page closed]
kisaacs has quit [Ping timeout: 260 seconds]
eschnett has quit [Quit: eschnett]
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
Smasher has quit [Remote host closed the connection]