aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
diehlpk_mobile2 has joined #ste||ar
diehlpk_mobile has quit [Ping timeout: 276 seconds]
diehlpk_mobile2 has quit [Ping timeout: 246 seconds]
diehlpk_mobile has joined #ste||ar
diehlpk_mobile2 has joined #ste||ar
diehlpk_mobile has quit [Ping timeout: 264 seconds]
diehlpk_mobile has joined #ste||ar
diehlpk_mobile2 has quit [Read error: Connection reset by peer]
diehlpk_mobile has quit [Client Quit]
galab2 has joined #ste||ar
Smasher has quit [Remote host closed the connection]
<zao>
Not sure why there's less output in the Timeout scenario.
anushi has quit [Ping timeout: 276 seconds]
daissgr has joined #ste||ar
hkaiser has quit [Quit: bye]
mcopik_ has quit [Ping timeout: 264 seconds]
K-ballo has quit [Quit: K-ballo]
daissgr has quit [Ping timeout: 246 seconds]
anushi has joined #ste||ar
anushi has quit [Client Quit]
nanashi55 has quit [Ping timeout: 240 seconds]
jaafar_ has joined #ste||ar
nanashi55 has joined #ste||ar
Anushi1998 has quit [Remote host closed the connection]
anushi has joined #ste||ar
jaafar_ has quit [Quit: Konversation terminated!]
jaafar has joined #ste||ar
jaafar has quit [Ping timeout: 260 seconds]
Anushi1998 has joined #ste||ar
upgroovecoder has joined #ste||ar
david_pfander1 has quit [Ping timeout: 240 seconds]
simbergm has quit [Ping timeout: 248 seconds]
nikunj_ has joined #ste||ar
sharonhsl has left #ste||ar [#ste||ar]
Anushi1998 has quit [Remote host closed the connection]
Anushi1998 has joined #ste||ar
jakub_golinowski has joined #ste||ar
K-ballo has joined #ste||ar
marco has joined #ste||ar
marco is now known as Guest17828
david_pfander1 has joined #ste||ar
david_pfander1 has quit [Remote host closed the connection]
titzi has joined #ste||ar
Anushi1998 has quit [Remote host closed the connection]
nanashi55 has quit [*.net *.split]
nanashi55 has joined #ste||ar
titzi_ has quit [Ping timeout: 260 seconds]
titzi has quit [Changing host]
titzi has joined #ste||ar
daissgr has joined #ste||ar
jaafar has joined #ste||ar
simbergm has joined #ste||ar
david_pfander1 has joined #ste||ar
jaafar has quit [Ping timeout: 264 seconds]
david_pfander1 has quit [Ping timeout: 252 seconds]
jakub_golinowski has quit [Quit: Ex-Chat]
hkaiser has joined #ste||ar
verganz has quit [Ping timeout: 260 seconds]
mcopik_ has joined #ste||ar
sharonlam has joined #ste||ar
sharonlam has left #ste||ar [#ste||ar]
sharonhsl has joined #ste||ar
jbjnr has joined #ste||ar
eschnett has joined #ste||ar
jbjnr has quit [Client Quit]
jbjnr has joined #ste||ar
aalekhn__ has joined #ste||ar
prashantjha has joined #ste||ar
prashantjha has quit [Client Quit]
CaptainRubik has joined #ste||ar
Anushi1998 has joined #ste||ar
hkaiser has quit [Quit: bye]
diehlpk_work has joined #ste||ar
<diehlpk_work>
sharonhsl, Hi
<diehlpk_work>
just read your e-mail and we have to discuss some things
<CaptainRubik>
diehlok_work : Hi, I read that you said GSoC is not for people with internships. I would really like to contribute to HPX, and if I have enough time alongside my internship, would my proposal still be turned down based on the internship? :(
<CaptainRubik>
diehlpk_work *
<diehlpk_work>
CaptainRubik, You should work 30+ hours per week
<diehlpk_work>
And an internship is around 40 hours per week
<diehlpk_work>
So 70 hours per week is quite intense
<diehlpk_work>
In our experience, students mostly underestimated the workload and failed the first evaluation
<CaptainRubik>
Well, I have Sat and Sun off, and given that the work would be similar, it shouldn't be a problem. I will include this in the proposal
<diehlpk_work>
How would you like working 15 hours a day?
<diehlpk_work>
Let me shorten my answer: we want all our students to work full-time on GSoC
<CaptainRubik>
I don't have any problem but the timeline is so designed that I don't have to work for 15 hours each day. Thanks for the reply though. :)
<diehlpk_work>
Having exams and other duties is ok, but you should be able to work full-time on the project for most of the three months
aserio has joined #ste||ar
<CaptainRubik>
Thanks.
<diehlpk_work>
You are of course welcome to contribute without GSoC
<sharonhsl>
hi diehlpk_work, yea sure
<diehlpk_work>
CaptainRubik, I did not say that; I referred to the guidelines from Google
<diehlpk_work>
From my experience and from reading the GSoC mentor mailing list, most orgs do not recommend doing an internship and GSoC at the same time.
<CaptainRubik>
Sure, I will try to make my case in the proposal hoping for acceptance. :)
<diehlpk_work>
sharonhsl, I do not understand why you talk about a mesh in your e-mail.
CaptainRubik has quit [Quit: Page closed]
<sharonhsl>
ohh I'm not sure how to represent the material, I thought a mesh was a related idea
<diehlpk_work>
No, this is a meshless method
<diehlpk_work>
And there are huge differences to mesh-based methods like finite elements
<diehlpk_work>
Using a dual mesh is one possibility to place the nodes
<diehlpk_work>
But you do not have to store edges like in a mesh
<diehlpk_work>
You have these discrete nodes instead of a mesh
<sharonhsl>
so how do you save a broken bond?
<sharonhsl>
I was thinking the edges in mesh are representing bonds
<diehlpk_work>
Ok, a node does not only have bonds to its nearest neighbors
<diehlpk_work>
it has bonds to many more nodes
<sharonhsl>
within the interactive zone?
<diehlpk_work>
Considering only the nearest neighbors, you could just use finite elements
<diehlpk_work>
Yes
<diehlpk_work>
Forget about the edges in the dual mesh, this is just to generate the geometry
<sharonhsl>
here's the thing I don't understand, what is the formal definition of a crack?
<diehlpk_work>
There is none
<diehlpk_work>
People have to define it
<diehlpk_work>
We only have damage and no cracks
<sharonhsl>
ohhhhhh
<diehlpk_work>
Damage is the number of actual bonds / initial bonds
<diehlpk_work>
So you just have this damage information
prashantjha has joined #ste||ar
<diehlpk_work>
Extracting cracks is a different field
<sharonhsl>
and a damage occurs when a bond breaks?
<diehlpk_work>
There is one paper with Michael where we tried to extract the crack surface
<diehlpk_work>
No, we just say that the bond between two nodes is broken and no forces are exchanged
<diehlpk_work>
Damage is always a value per node
<prashantjha>
breaking of a bond is a sign of damage. Damage at a node is computed as the ratio of the number of broken bonds to the total number of bonds of that node.
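A minimal sketch of the damage ratio described above, assuming each node keeps one flag per initial bond; the type and function names are purely illustrative, not from any existing peridynamics code:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // One material point: flags mark which of its initial bonds are still intact.
    struct node
    {
        std::vector<bool> bond_intact;
    };

    // Damage = broken bonds / initial bonds, i.e. 1 - intact / initial.
    double damage(node const& n)
    {
        if (n.bond_intact.empty())
            return 0.0;
        std::size_t intact =
            std::count(n.bond_intact.begin(), n.bond_intact.end(), true);
        return 1.0 - static_cast<double>(intact) / n.bond_intact.size();
    }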
<sharonhsl>
in your paper you show diagrams of crack growth, like the Stanford rabbit. The white spaces (I assume they represent our visual perception of cracks) appear because the nodes move by pair-wise forces?
<diehlpk_work>
sharonhsl, Which paper?
mbremer has joined #ste||ar
<sharonhsl>
I only read one of your papers, it's called Modelling and Simulation of cracks and fractures with peridynamics in brittle materials (2017)
<diehlpk_work>
Oh, this is my thesis
<diehlpk_work>
Which Chapter or Section?
<sharonhsl>
section 4.3
<diehlpk_work>
Ok, these are advanced visualization techniques
<diehlpk_work>
To try to get more details about the crack
<diehlpk_work>
and its surface
<sharonhsl>
ahaa, so to put it simply, this project is about tracking the movement of discrete nodes governed by the peridynamic equation of motion in a vector space?
<diehlpk_work>
Yes
<diehlpk_work>
No, visualization like in this section
<sharonhsl>
as for the coordinates, is it good to use a fixed grid?
<diehlpk_work>
I was thinking that someone provides you with a CSV file with x, y, z, volume, density and you read this file
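A minimal sketch of reading such a file, assuming plain comma-separated columns x, y, z, volume, density with no header; the struct and function names are illustrative only:

    #include <algorithm>
    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    struct material_point
    {
        double x, y, z, volume, density;
    };

    // Read one material point per line from a file with columns
    // x,y,z,volume,density.
    std::vector<material_point> read_points(std::string const& path)
    {
        std::vector<material_point> points;
        std::ifstream in(path);
        std::string line;
        while (std::getline(in, line))
        {
            std::replace(line.begin(), line.end(), ',', ' ');
            std::istringstream row(line);
            material_point p;
            if (row >> p.x >> p.y >> p.z >> p.volume >> p.density)
                points.push_back(p);
        }
        return points;
    }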
<sharonhsl>
so visualization is also included in the project?
<diehlpk_work>
No
<diehlpk_work>
You should only implement the tracking of discrete nodes on distributed memory with node balancing
<sharonhsl>
great, it's much clearer now. It's funny, it sounded so complicated that I kept losing focus at first, but now the goal is very clear
<prashantjha>
sharonhsl, in your email you talk about the ghost grid idea
<sharonhsl>
yea
<prashantjha>
could you explain it here for the benefit of others
<zao>
heh
<sharonhsl>
sure
K-ballo has quit [Ping timeout: 264 seconds]
<sharonhsl>
a lot of problems consist of a structured grid of points that needs to be updated, e.g. by referencing neighboring cells to obtain a value
<sharonhsl>
we usually want to do it in distributed memory, so we don't need one gigantic blob of RAM
<sharonhsl>
so we divide the grid across separate localities to do the computation
<sharonhsl>
btw it's from the paper Ghost Cell Pattern by Fredrik Berg Kjolstad
<sharonhsl>
when we need to do computation on edge cells, we need the values of its neighboring cells in other localities
<prashantjha>
yes
hkaiser has joined #ste||ar
<sharonhsl>
so the author proposed the idea of ghost cells, which are additional empty cells wrapping around a sub-grid
<sharonhsl>
at each iteration, he uses MPI to replicate the sides from the other ranks
K-ballo has joined #ste||ar
<sharonhsl>
that's pretty much it
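A minimal sketch of the ghost-cell layout just described, independent of MPI or HPX: each locality stores its sub-grid padded by one extra row and column on every side, and that padding is refilled from the neighbouring sub-grids before each iteration. The names are illustrative only:

    #include <cstddef>
    #include <vector>

    // Local sub-grid of nx * ny interior cells, padded with a one-cell
    // ghost layer on every side: storage is (nx + 2) * (ny + 2).
    struct subgrid
    {
        std::size_t nx, ny;
        std::vector<double> data;

        subgrid(std::size_t nx_, std::size_t ny_)
          : nx(nx_), ny(ny_), data((nx_ + 2) * (ny_ + 2), 0.0)
        {
        }

        double& at(std::size_t i, std::size_t j)    // i in 0..nx+1, j in 0..ny+1
        {
            return data[i * (ny + 2) + j];
        }
    };

    // One update step over the interior cells only; the ghost layer
    // (i == 0, i == nx + 1, j == 0, j == ny + 1) is assumed to have been
    // filled from the neighbouring localities before this is called.
    void update_interior(subgrid& in, subgrid& out)
    {
        for (std::size_t i = 1; i <= in.nx; ++i)
            for (std::size_t j = 1; j <= in.ny; ++j)
                out.at(i, j) = 0.25 * (in.at(i - 1, j) + in.at(i + 1, j) +
                                       in.at(i, j - 1) + in.at(i, j + 1));
    }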
<prashantjha>
ok. This idea sounds good for our problem.
<prashantjha>
we may have a mesh where the mesh nodes are not located in a lattice fashion
<prashantjha>
will it make implementation difficult?
<sharonhsl>
so I think this idea is applicable to our problem: even though the nodes may not align perfectly, the nodes have volumes and cannot overlap, so within the interactive zone there is a limit to the number of nodes. So there's no huge worry about a locality having to handle an unexpected amount of nodes
<sharonhsl>
the problem is more about load balancing: how to split the domain when the nodes are very concentrated in one locality
<prashantjha>
right
<sharonhsl>
for that I still haven't thought about it
<prashantjha>
diehlpk_work, what do you think about ghost cell idea?
<diehlpk_work>
It works for structured grids, but what happens with a weird geometry?
<sharonhsl>
you mean like a hyperbolic one?
<diehlpk_work>
Imagine a disk
<diehlpk_work>
with 8 domains
<diehlpk_work>
So we could have most points in the middle
<diehlpk_work>
and less points in outer domains
<diehlpk_work>
So how does the ghost cell method do load balancing here?
<diehlpk_work>
And how do you replace the MPI specific stuff with HPX in this method?
<sharonhsl>
do fewer points in the outer domains necessarily mean less concentrated nodes?
<diehlpk_work>
For a circle or a sphere, yes
<diehlpk_work>
When you have the same nodal spacing
<diehlpk_work>
So these nodes would have to do less work
<sharonhsl>
I don't quite understand, I'm thinking that initially all nodes are distributed evenly in a disk/sphere
<sharonhsl>
by fewer points, do you mean that the grid is looser towards the outer edge?
Anushi1998 has quit [Quit: Leaving]
<diehlpk_work>
sharonhsl, Draw a circle on a sheet of paper
<diehlpk_work>
Divide it with your method into subdomains for four nodes
aserio1 has joined #ste||ar
<diehlpk_work>
After that refine it to 16 nodes
<diehlpk_work>
So some nodes close to the edge have fewer nodes to track
<diehlpk_work>
Which means less computational work
<diehlpk_work>
How would you distribute the computational work between them?
<mbremer>
I'm running on a skylake node with 2 sockets with 24 cores per socket.
<zao>
Does that include hyperthreads?
<mbremer>
No it does not
<mbremer>
Hyperthreads count?
<heller_>
yes
<mbremer>
kk,
<mbremer>
Thanks! Let me rebuild then.
<sharonhsl>
diehlpk_work, prashantjha: thanks a lot for the discussion. I should sleep now, will get back with new ideas tmr
<diehlpk_work>
You are welcome
<prashantjha>
you are welcome.
sharonhsl has left #ste||ar [#ste||ar]
eschnett has joined #ste||ar
jaafar has joined #ste||ar
prashantjha has quit [Ping timeout: 245 seconds]
<mbremer>
zao, @heller_: That did the trick. Thanks
<zao>
Yay.
<mbremer>
I have two follow-up questions. Does it actually matter what max cpu count gets set to? Or does it just need to be greater than the number of cores w/ hyperthreading?
<mbremer>
And secondly, somewhat unrelated: is there any advantage to using numactl for distributed HPX jobs? I figure since HPX uses hwloc, it would already have those benefits baked in
<zao>
I believe it's that way because you need something fancier than a uint64_t to keep track of your masks.
<zao>
I'm not familiar with the particulars for HPX, but it tends to be the case for most software.
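A small illustration of zao's point about mask width, using the numbers from mbremer's machine; nothing here is HPX-specific, the types are illustrative only:

    #include <bitset>

    // 2 sockets * 24 cores * 2 hyperthreads = 96 processing units: more bits
    // than a 64-bit integer mask can address, so a wider mask type (or a
    // larger compile-time "max cpu count") is needed to cover every PU.
    constexpr unsigned processing_units = 2 * 24 * 2;
    static_assert(processing_units > 64, "would fit in a 64-bit mask");

    using cpu_mask = std::bitset<128>;    // one bit per possible PU

    cpu_mask bind_to(unsigned pu)
    {
        cpu_mask m;
        m.set(pu);    // mark the processing unit this thread should run on
        return m;
    }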
EverYoung has joined #ste||ar
prashantjha has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
mcopik has joined #ste||ar
mcopik has quit [Quit: Leaving]
EverYoung has quit [Remote host closed the connection]
EverYoun_ has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
eschnett has quit [Quit: eschnett]
Smasher has quit [Remote host closed the connection]
Smasher has joined #ste||ar
eschnett has joined #ste||ar
<K-ballo>
we ought to print have-libatomic during configure
<katywilliams>
Any help? Before the meeting, I pulled and built hpx [commit: e8ce5c3] on rostam, then I tried to build phylanx [commit: 76893fd] on top of it and I got a "use of undeclared identifier 'HWLOC_MEMBIND_REPLICATE'" error
<katywilliams>
included from thread_executer.hpp
<zao>
I wonder, if I build HPX on one x64 arch, will it be usable out of the box on another x64 arch?
<zao>
(assuming Clang/GCC)
<K-ballo>
katywilliams: sounds like that issue with hwloc 2.0 that's been going around lately
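For context, HWLOC_MEMBIND_REPLICATE exists in hwloc 1.x but was removed in hwloc 2.0, so code using it needs an API-version guard. A minimal sketch of such a guard; the surrounding enum and function are illustrative, not the actual Phylanx/HPX code:

    #include <hwloc.h>

    // Map an application-level policy to an hwloc membind policy. The
    // "replicate" case only exists for hwloc 1.x; hwloc 2.0 removed
    // HWLOC_MEMBIND_REPLICATE, hence the API-version guard.
    enum class membind_policy { bind, interleave, replicate };

    hwloc_membind_policy_t to_hwloc(membind_policy p)
    {
        switch (p)
        {
        case membind_policy::interleave:
            return HWLOC_MEMBIND_INTERLEAVE;
    #if HWLOC_API_VERSION < 0x00020000
        case membind_policy::replicate:
            return HWLOC_MEMBIND_REPLICATE;
    #endif
        default:
            return HWLOC_MEMBIND_BIND;
        }
    }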