aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
kisaacs has quit [Ping timeout: 240 seconds]
<github> [hpx] hkaiser pushed 1 new commit to support_flecsi: https://git.io/vAlLg
<github> hpx/support_flecsi d0282df Hartmut Kaiser: Making sure resume/suspend is not called for schedulers that don't support it...
EverYoung has joined #ste||ar
kisaacs has joined #ste||ar
EverYoung has quit [Ping timeout: 260 seconds]
EverYoung has joined #ste||ar
jaafar has joined #ste||ar
vamatya has quit [Ping timeout: 276 seconds]
EverYoung has quit [Remote host closed the connection]
parsa has quit [Quit: Zzzzzzzzzzzz]
hkaiser has quit [Quit: bye]
diehlpk has joined #ste||ar
<diehlpk> Notification of Acceptance Monday, March 5, 2018
<diehlpk> ISC has a new deadline
kisaacs has quit [Ping timeout: 256 seconds]
kisaacs has joined #ste||ar
kisaacs has quit [Ping timeout: 268 seconds]
kisaacs has joined #ste||ar
hkaiser has joined #ste||ar
hkaiser has quit [Client Quit]
Nikunj has joined #ste||ar
vamatya has joined #ste||ar
anushi has joined #ste||ar
nanashi64 has joined #ste||ar
nanashi55 has quit [Ping timeout: 260 seconds]
nanashi64 is now known as nanashi55
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
EverYoung has quit [Read error: Connection reset by peer]
Nikunj has quit [Ping timeout: 260 seconds]
kisaacs has quit [Ping timeout: 248 seconds]
kisaacs has joined #ste||ar
kisaacs has quit [Ping timeout: 248 seconds]
parsa has joined #ste||ar
kisaacs has joined #ste||ar
kisaacs has quit [Ping timeout: 264 seconds]
simbergm_ has quit [Ping timeout: 255 seconds]
simbergm_ has joined #ste||ar
vamatya has quit [Quit: Leaving]
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
parsa has quit [Client Quit]
kisaacs has joined #ste||ar
kisaacs has quit [Ping timeout: 248 seconds]
jaafar has quit [Ping timeout: 276 seconds]
parsa has joined #ste||ar
itachi_uchiha_ has joined #ste||ar
Antrix[m] has joined #ste||ar
nanashi55 has quit [Ping timeout: 256 seconds]
nanashi55 has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
CaptainRubik has joined #ste||ar
CaptainRubik has quit [Client Quit]
mcopik has joined #ste||ar
anushi has quit [Remote host closed the connection]
anushi has joined #ste||ar
anushi has quit [Remote host closed the connection]
Rakesh-Senwar has joined #ste||ar
anushi has joined #ste||ar
Rakesh-Senwar has quit [Quit: Page closed]
hkaiser has joined #ste||ar
<github> [hpx] hkaiser force-pushed support_flecsi from d0282df to b81a510: https://git.io/vA8kG
<github> hpx/support_flecsi b81a510 Hartmut Kaiser: Making sure resume/suspend is not called for schedulers that don't support it...
<github> [hpx] StellarBot pushed 1 new commit to gh-pages: https://git.io/vA8I3
<github> hpx/gh-pages 46724e1 StellarBot: Updating docs
hkaiser[m] has joined #ste||ar
hkaiser[m] has quit [Remote host closed the connection]
<Antrix[m]> Hey, hkaiser I want to get started on the "Script Language Bindings" project. Could you point me towards an issue to help me get started? Also, is IRC the preferred communication channel here or the mailing list?
diehlpk has quit [Ping timeout: 240 seconds]
<hkaiser> Antrix[m]: irc is the preferred channel, yes
<hkaiser> Antrix[m]: we don't have a ticket for the bindings
<hkaiser> Antrix[m]: we discussed bindings for two possible languages: Lua and Python
<hkaiser> any preference?
<hkaiser> (the work for lua has progressed somewhat already, but might need changes)
<hkaiser> Antrix[m]: the existing work can be found here: https://github.com/STEllAR-GROUP/hpx_script
<hkaiser> was not touched for quite some time, though
<Antrix[m]> hkaiser: I would prefer python
<hkaiser> Antrix[m]: perfect
<hkaiser> I'd prefer if you focussed on that too ;)
<hkaiser> how well do you know Python?
<Antrix[m]> I have done many course projects in python, so I don't think my python proficiency should be an issue
eschnett has quit [Quit: eschnett]
<hkaiser> Antrix[m]: what do you know about the inner workings of Python?
<hkaiser> Antrix[m]: HPX is heavily multi-threaded, so integrating it with Python will require some knowledge of how this can be done
<Antrix[m]> hkaiser: I have done a lot of multi-threaded programming in C++, using MPI, OpenMP, CUDA etc., but I have not delved into multithreading in python
<Antrix[m]> I would like to get started on that.
apsknight has joined #ste||ar
kisaacs has joined #ste||ar
apsknight has quit [Quit: apsknight]
kisaacs has quit [Ping timeout: 256 seconds]
mcopik has quit [Ping timeout: 240 seconds]
nanashi64 has joined #ste||ar
nanashi55 has quit [Ping timeout: 260 seconds]
nanashi64 is now known as nanashi55
nanashi64 has joined #ste||ar
nanashi55 has quit [Ping timeout: 264 seconds]
nanashi64 is now known as nanashi55
Nikunj has joined #ste||ar
<Nikunj> @hkaiser: I wish to get myself started on the "A C++ Runtime Replacement" project. Could you please provide me with a starting point to begin my in-depth analysis of the project?
<K-ballo> that one sounds big... Nikunj, have a link to it?
<Nikunj> @K-ballo: I didn't get you. Link as in should I provide you with a link to that project?
K-ballo1 has joined #ste||ar
<Nikunj> @K-ballo: Going through the GSoC project list, I found it to be the most interesting. I will certainly have to clear up my OS concepts, but I feel that it will be a really good experience trying to work on something like this.
K-ballo has quit [Ping timeout: 240 seconds]
K-ballo1 is now known as K-ballo
<K-ballo> sorry, I got disconnected, missed the link
<K-ballo> yes, thank you
simbergm_ has quit [Ping timeout: 240 seconds]
simbergm_ has joined #ste||ar
<hkaiser> Antrix[m]: cool
<hkaiser> Nikunj: that one is HUGE
<jbjnr> hkaiser: heller_ I sent an email a few days ago to the main dev of the libcds project (Concurrent data structures) and asked about relicensing libcds with the boost license so that we could include all/part of it in hpx.
<jbjnr> his reply was - yes, certainly - he'd love to help us out
<hkaiser> jbjnr: any responses?
<jbjnr> This is fantastic
<hkaiser> marvelous!
<hkaiser> so the GSoC project would turn into an integration task?
<jbjnr> so I propose that our GSoC project be to consolidate libcds into hpx as a 'subproject' within an hpx::concurrent namespace, if that is feasible
<jbjnr> ^^yes
<hkaiser> nice
<hkaiser> there was a student who was interested
<jbjnr> yes. that's what made me ask
<jbjnr> no point reinventing the wheel
<hkaiser> sounds much better than to reinvent everything ourselves
<jbjnr> and so using libcds would be just perfect
<hkaiser> nod, I agree
<hkaiser> have a link to libcds?
<hkaiser> how portable is it?
<jbjnr> that will be our gsoc task I guess - to find out how much we can reuse without having to redo everything.
<Antrix[m]> hkaiser: I have never made wrappers in python. I think this project needs that
<jbjnr> our mutexes, spinlocks, futures etc need to be integrated in
<hkaiser> Antrix[m]: yes - you might want to look into pybind11 for that
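For reference, a minimal pybind11 module looks roughly like this (an illustrative sketch only, not an actual HPX binding; the module name hpx_py and the function square are made up):

```cpp
// Illustrative pybind11 sketch, not an actual HPX binding: hpx_py and
// square are made-up names. A real binding would additionally have to
// start/stop the HPX runtime and bridge HPX threads with Python's GIL.
#include <pybind11/pybind11.h>

int square(int x) { return x * x; }

PYBIND11_MODULE(hpx_py, m)
{
    m.doc() = "toy module demonstrating the pybind11 mechanics";
    m.def("square", &square, "compute x*x");
}
```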
<hkaiser> jbjnr: makes sense
diehlpk has joined #ste||ar
<hkaiser> jbjnr: have a link for the libcds you refer to (there are several)
eschnett has joined #ste||ar
<Nikunj> @hkaiser: I did notice that it will be quite big. The project is really interesting though, so I thought that I could give it a try.
<hkaiser> Nikunj: you might want to think about carving out a smaller piece of this - once you start writing your proposal you could outline the whole thing and specify the sub-tasks you'd like to focus on
<hkaiser> jbjnr: tks
<jbjnr> also, look at the pull requests and see lots of interesting contributions
<Nikunj> @hkaiser: Ok. Could you please provide me with any starting links so that I could know where to begin my research?
hkaiser[m] has joined #ste||ar
hkaiser[m] has quit [Client Quit]
hkaiser[m] has joined #ste||ar
hkaiser[m] has quit [Remote host closed the connection]
hkaiser[m] has joined #ste||ar
kisaacs has joined #ste||ar
<hkaiser> Nikunj: the main idea is that we would like for hpx to be usable without having to go through a special main() function
<hkaiser> this project is very std-library (compiler) specific and might require separate solutions for different environments
<hkaiser> it will require some research/code reading
<hkaiser> currently hpx requires special startup code (see https://stellar-group.github.io/hpx/docs/html/hpx/manual/applications.html)
<hkaiser> the current solution we utilize is to #define main, which is awkward at best and plain wrong if you're honest
<hkaiser> Nikunj: we would like to be able to just leave main() alone and things should work
<hkaiser> that probably requires somehow modifying/hooking into the std-library application initialization routine (the one that eventually calls main()) to additionally do the required hpx-specific things
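A compilable sketch of the #define main trick described above (heavily simplified; the real HPX headers do considerably more, and the runtime startup is elided here):

```cpp
// Heavily simplified sketch of the "#define main" approach; the real
// HPX headers are more involved and actually start the runtime.
#include <cstdio>

// --- library header: rename the user's main() ---
#define main hpx_user_main
int hpx_user_main(int argc, char** argv);   // forward declaration

// --- user code: the macro renames this to hpx_user_main ---
int main(int argc, char** argv)
{
    std::puts("user code runs here, runtime would already be up");
    return 0;
}

#undef main
// --- library-provided real entry point ---
int main(int argc, char** argv)
{
    // a real implementation would initialize the HPX runtime here,
    // run hpx_user_main on an HPX thread, and shut down afterwards
    return hpx_user_main(argc, argv);
}
```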
mcopik has joined #ste||ar
<Antrix[m]> hkaiser: Where do you suggest I should start for the project?
<hkaiser> any time
<hkaiser> Antrix[m]: you asked where, not when... ;)
<Antrix[m]> hkaiser: where :P
<hkaiser> Antrix[m]: well, I think best would be to understand what hpx is, then you could start designing what an interface to hpx in python would look like
<Antrix[m]> hmm
<hkaiser> and then you can start implementing it
<hkaiser> there are those lua bindings, might be a good starting point to understand what's done there
<github> [hpx] hkaiser pushed 1 new commit to master: https://git.io/vA8cj
<github> hpx/master 8b0f81c Hartmut Kaiser: Merge pull request #3179 from STEllAR-GROUP/support_flecsi...
<github> [hpx] hkaiser deleted support_flecsi at b81a510: https://git.io/vA8Ce
parsa has joined #ste||ar
<Nikunj> @hkaiser: Ok, I guess I know where to dive in.
<hkaiser> Nikunj: have you ever looked at std-library code?
<hkaiser> libc? msvcrt?
<hkaiser> not sure if clang has its own
<Nikunj> @hkaiser: I have not, but I was thinking of diving into the code base of the std-library.
<Nikunj> To better understand ways through which I can implement it
<hkaiser> I think both, libc and msvcrt have hooks allowing to achieve something like that
<hkaiser> perhaps we can even do it without special startup code, just using a global object (like shown here: https://github.com/STEllAR-GROUP/hpx/blob/master/examples/quickstart/init_globally.cpp)
<hkaiser> but that example would have to be integrated into the build system etc to make it usable
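From memory, the shape of that init_globally.cpp example is roughly the following (a simplified, non-verbatim sketch; the exact hpx::start signature and the startup/shutdown handshake differ in the real file):

```cpp
// Rough, from-memory sketch of the init_globally.cpp idea; see the
// linked example for the real code, which also waits for the runtime
// to come up and handles configuration.
#include <hpx/hpx_start.hpp>
#include <hpx/hpx.hpp>

struct manage_global_runtime
{
    manage_global_runtime()
    {
        // start the runtime without a dedicated hpx_main
        static char app[] = "app";
        static char* argv[] = { app, nullptr };
        hpx::start(nullptr, 1, argv);
    }

    ~manage_global_runtime()
    {
        // request shutdown from an HPX thread, then wait for it
        hpx::apply([]() { hpx::finalize(); });
        hpx::stop();
    }
};

// constructed before main() runs, destroyed after it returns
manage_global_runtime init;
```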
<Nikunj> @hkaiser: Using global objects would be a nice way. Can we not design custom hooks?
<hkaiser> shrug, depends on how complex/portable a solution would be
<hkaiser> global objects are non-intrusive, but prone to initialization sequence problems
<Nikunj> @hkaiser: That's true
<Nikunj> @hkaiser: I can look into ways to incorporate the global object into the build system
<hkaiser> Nikunj: might be a good starting point
<hkaiser> in the end, if this is successful, it could change all existing hpx applications, as we wouldn't need any hpx_main anymore
kisaacs has quit [Ping timeout: 240 seconds]
<hkaiser> Nikunj: so this would very much require some experimentation and some research
<Nikunj> @hkaiser: Yes, this way we can write cleaner-looking code that will be easier to understand
<Nikunj> @hkaiser: This project looks very interesting, so I am willing to put in as much effort as I can. In the meantime I will also look for other solutions. I will read up on custom hooks and whether we can implement them or not.
<hkaiser> Nikunj: very nice, thanks
jaafar has joined #ste||ar
<Nikunj> So, I think I know where to start. I will start working on it right away :)
<hkaiser> good luck, let's discuss things as you learn more
<zao> Learning by charging headfirst into walls, my kind of approach :)
<Nikunj> @hkaiser: Yes, sure :)
<hkaiser> zao: that's the way we operate ;)
<K-ballo> is there any other way?
<Nikunj> @hkaiser: I have also applied for the mailing list. Please add me to the mailing list as well if you deem fit.
<Antrix[m]> hkaiser: I am trying to build hpx on my computer. Is Boost 1.66 ok for hpx, or do I need to install V1.55?
<hkaiser[m]> should be fine
<Nikunj> @K-ballo: I will start my analysis for the project. If I find another way I will surely share it here.
<zao> Nikunj: I think I've seen some previous correspondence from you on the list already.
<zao> Antrix[m]: Unless there's some particular platform bugs or newly broken stuff, any version of Boost higher than the minimal one tends to be fine.
hkaiser has quit [Ping timeout: 260 seconds]
<zao> We had some trouble on macOS a while ago where Boost was quite broken.
<zao> Also some problems with some Boost library that wouldn't build right in C++17, can't remember if it was ICL or some other one.
<K-ballo> all those should be documented in the prerequisites section of the manual
<Nikunj> @zao: Those were the emails I've sent. However, I do not receive any daily digest or emails corresponding to the mailing list.
<hkaiser[m]> @Nikunj pls send another request
<Antrix[m]> zao: Thanks! I am using ubuntu 14.04, which installs older versions of gcc and cmake. I am installing newer versions of those too. I am a bit scared to softlink gcc-4.9 to gcc
<Nikunj> @Antrix[m]: I have built hpx using boost 1.66. Depending upon your system, you will receive a few warnings, which you can ignore.
<zao> Antrix[m]: -DCMAKE_C_COMPILER=gcc-4.9 -DCMAKE_CXX_COMPILER=g++-4.9 are your friends.
<Nikunj> @zao: OK
<zao> There's also some "toolchain" files in the tree somewhere that demonstrate how to use even weirder compiler setups.
<zao> I strongly recommend against diddling in system bin/include/lib dirs, always a great way to break the box.
<Antrix[m]> zao: Oh, thanks a lot
<zao> As for 14.04, I know that feeling way too well :)
<Antrix[m]> XD
<Nikunj> @zao: I have sent a subscription request right now.
<zao> I (thankfully) don't have access to any of the mailing list machinery, but I'm sure someone will poke at it if needed.
<Nikunj> zao: Ok, I will wait in that case :)
Nikunj has left #ste||ar [#ste||ar]
EverYoung has joined #ste||ar
EverYoun_ has joined #ste||ar
EverYoung has quit [Read error: Connection reset by peer]
EverYoun_ has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
hkaiser has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
<Antrix[m]> HWLOC_MEMBIND_REPLICATE not declared in scope while running make
<Antrix[m]> how do I fix it?
<zao> Antrix[m]: We depend on a reasonably new version of hwloc, I believe.
<Antrix[m]> I have installed hwloc
<zao> You may need to build and install a newer one, and -DHWLOC_ROOT=/where/ever/you/installed
diehlpk has quit [Ping timeout: 248 seconds]
<zao> Hrm, that flag should be ancient if I read the docs right.
<zao> If you look at your CMakeCache, do the hwloc entries look sane?
<zao> And did you install the -dev package?
<Antrix[m]> hwloc version 2.0 installed
<zao> hrm, haven't tried 2.0 here yet, maybe they deprecated that flag?
<Antrix[m]> zao: hwloc 2.0 is the new stable release
<zao> K-ballo: do you know?
<Antrix[m]> -dev for?
<Antrix[m]> hwloc-dev?
<K-ballo> I have never tried 2.0, always 1.something
<K-ballo> it doesn't need to be an all that recent 1.x either; the need for a recent version was made optional (whereas having some hwloc at all was made mandatory)
<K-ballo> if the build doesn't work with 2.0 then it might be worth a ticket, assuming it does work with 1.x
<Antrix[m]> which version do you use?
<Antrix[m]> I have not installed any -dev packages. Which dev packages are you referring to?
<Antrix[m]> hmm
<K-ballo> Antrix[m]: is that MEMBIND error the first error your build hits?
<K-ballo> the very first error?
<Antrix[m]> K-ballo: I will try using hwloc V1.10 recommended on prereqs page
<Antrix[m]> yes
<Antrix[m]> Yes Yes
<Antrix[m]> K-ballo: It is the very first error
<jbjnr> Antrix[m]: do not use hwloc 2.0 - it changes the way the memory hierarchy is laid out and breaks hpx completely
<Antrix[m]> jbjnr: I am trying with hwloc V1.11.9 now.
<zao> sounds like we should guard against it maybe?
<jbjnr> in hwloc 1.x a socket can be a numa domain and a numa domain has cores, so the tree goes node->numa->cores, but in hwloc 2.0 a numa domain can be shared between sockets and cores on the same socket can be in different numa domains - the hierarchy places cores and numa domains as siblings
<jbjnr> I can't remember the details
<jbjnr> zao: yes - we need a cmake check that throws an error if the user tries to use hwloc 2 until we fix it
<jbjnr> unfortunately, it needs a significant rewrite of the topo class
<K-ballo> is there a ticket for that already?
<jbjnr> no. I should have done it, but hwloc 2 was only in rc stage, they must have released it for real ...
<jbjnr> I'll do it now
<hkaiser> we have a ticket for hwloc2, it says compilation is broken
<hkaiser> jbjnr: ^
<hkaiser> #3161
<K-ballo> ah yes, I only meant for rejecting 2.0
<jbjnr> oops. I just created another one. sorry
<jbjnr> just in case anyone cares ... http://cdash.cscs.ch/index.php?project=DCA
<jbjnr> pycicle is not tied to hpx any more
<jbjnr> phylanx?
<hkaiser> jbjnr: nice!
<hkaiser> yes, we'd be interested in using it
<Antrix[m]> my system's memory is exhausted while running make!!
<Antrix[m]> Is there a workaround for it?
<K-ballo> make less
<hkaiser> make -j1?
<Antrix[m]> I think this problem is due to make -j4
<zao> You're going to use maybe 1-2G per thread when building source files, and up to 7-8G to link some silly tests/examples.
<jbjnr> make all partitioned vector stuff optional!
<hkaiser> jbjnr: still waiting for a PR from you doing this
<jbjnr> ok. I'll do that. I was hoping someone else would wave a magic wand and make it happen
<Antrix[m]> I only have 8Gs on this system
<jbjnr> you know. Elves and all that
kisaacs has joined #ste||ar
nanashi64 has joined #ste||ar
nanashi55 has quit [Ping timeout: 268 seconds]
nanashi64 is now known as nanashi55
CaptainRubik has joined #ste||ar
<CaptainRubik> @jbjnr : I have watched the workshop videos on YouTube that explain the HPX programming model and API, and also went through the code in lcos::local to understand mutexes and spinlocks. It's great that the libcds devs have agreed to help us integrate libcds with hpx. So could you guide me on the approach I should take now for writing a proposal for the project?
parsa has joined #ste||ar
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 265 seconds]
<jbjnr> CaptainRubik: Sure - but it's Sunday night and getting late where I am - can you give me your email address and I'll send you a message tomorrow?
<jbjnr> you can email me at biddisco@cscs.ch if you prefer not to post it here
<jbjnr> or better still - use the hpx-users list so that I can reply there and anyone else who wants to bid for the GSoC project can also see it.
parsa has quit [Quit: Zzzzzzzzzzzz]
<jbjnr> I'll do that - post some details to the hpx list tomorrow
<CaptainRubik> Well I sent a mail some time ago on the mailing list. Seems it is inactive. shikhar.coder@gmail.com . Drop me an email when you are available. I would like to talk about this project further.
simbergm_ has quit [Quit: WeeChat 2.0.1]
<jbjnr> CaptainRubik: ok. I'll check the list - in general, yes, it's not used as much as IRC, but for long messages it's easier to compose an email than to use this
<CaptainRubik> Alright thanks.
CaptainRubik has quit [Quit: Page closed]
mcopik has quit [Remote host closed the connection]
<Antrix[m]> I am trying to understand the example codes. Please clarify if I am wrong. A locality is a particular thread. What are futures?
<Antrix[m]> Sorry, a locality is a particular machine
<Antrix[m]> there can be multiple threads in a locality
<Antrix[m]> futures are confusing
kisaacs has quit [Ping timeout: 276 seconds]
<jbjnr> Antrix[m]: what are you not understanding about futures?
<zao> Futures are a common concept among several libraries.
<zao> It represents a value that may not be computed yet, but when it is, you can get() it, or fail trying.
<Antrix[m]> jbjnr: What are futures?
<Antrix[m]> How are they defined?
<zao> Much of HPX's power is combining and chaining together futures into webs of computation.
<Antrix[m]> that is why we wait for futures before doing something using those values. Now I get it
<Antrix[m]> I was reading the docs for fibonacci and hello world examples
<zao> With regular boring futures, you tend to block the thread when trying to get their contents.
<zao> In HPX, as our threads and futures are lightweight, when you block on a future, the runtime does other work on the OS thread you're happening to run on.
<Antrix[m]> are there non-boring versions of futures?
<zao> In the end, when the result is ready, execution resumes without the HPX thread knowing what happened.
<zao> In the real world, you have a 1:1 correspondence between the operating system's threads and threads-of-execution.
kisaacs has joined #ste||ar
<zao> In HPX, we've got "green" threads, which let us multiplex several threads-of-execution onto a small set of OS threads, interleaving their execution based on what's ready.
<Antrix[m]> > In HPX, as our threads and futures are lightweight, when you block on a future, the runtime does other work on the OS thread you're happening to run on.
<Antrix[m]> This other work is from outside the program in question?
<zao> As they're cheap, it's way more feasible to block on something like a future in HPX than it is in the real world, as there you take up the resources of a whole OS thread.
<zao> Other work as in other HPX threads.
<zao> Which may do things like actually compute the value you're waiting on.
<zao> Say that you have two threads, one that computes a value and puts it into a future, and one that blocks on a future wanting to do something with the result.
<zao> Assume that you only have one OS thread servicing HPX.
<zao> Say that the second thread runs first: at the place where it blocks, the runtime figures out that something else needs to run to eventually unblock it, so the computation thread would be scheduled on the OS thread, suspending the one that's blocked.
<zao> At some point in the future (heh) when the value is set, the first thread is eligible for resumption.
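A minimal HPX rendition of that two-thread scenario (a sketch assuming the usual hpx_main.hpp setup; compute is a made-up task):

```cpp
// One HPX thread computes a value, another blocks on the future.
#include <hpx/hpx_main.hpp>
#include <hpx/include/async.hpp>
#include <iostream>

int compute() { return 42; }   // hypothetical "first thread" work

int main()
{
    // schedule compute() as its own lightweight HPX thread
    hpx::future<int> f = hpx::async(compute);

    // this HPX thread suspends in get(); the scheduler reuses the OS
    // thread to run compute() (or other ready work) in the meantime
    std::cout << f.get() << "\n";
    return 0;
}
```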
<Antrix[m]> i see
<zao> Compared to regular futures, HPX ones have some fancy features to make them composable.
<zao> So you can do things like waiting for all or any of the futures in a collection to be ready, or simply chaining them together.
<zao> With things like: future<int> f1 = do_something_async(); future<bool> f2 = f1.then([](future<int> f) -> bool { return f.get() > 3; });
<zao> (take what I say here with a fistful of salt, as I don't really _use_ HPX, I just build it)
<zao> (if you were fancy, you'd use the word "monadic" :D)
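To make the composition point concrete, a hedged sketch of when_all plus .then (tasks invented; API roughly as of HPX 1.x, where when_all yields a future over a hpx::util::tuple of the input futures):

```cpp
// Invented tasks; API roughly as of HPX 1.x.
#include <hpx/hpx_main.hpp>
#include <hpx/include/async.hpp>
#include <hpx/include/lcos.hpp>
#include <iostream>

int main()
{
    hpx::future<int> a = hpx::async([]() { return 20; });
    hpx::future<int> b = hpx::async([]() { return 22; });

    // when_all produces a future over both inputs; .then attaches a
    // continuation that runs once both are ready
    hpx::future<int> sum = hpx::when_all(a, b).then(
        [](auto both)   // both: future of a tuple of futures
        {
            auto t = both.get();
            return hpx::util::get<0>(t).get() + hpx::util::get<1>(t).get();
        });

    std::cout << sum.get() << "\n";   // prints 42
    return 0;
}
```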
<Antrix[m]> thanks zao
<Antrix[m]> monadic fits well here :P
kisaacs has quit [Ping timeout: 240 seconds]
kisaacs has joined #ste||ar
parsa has joined #ste||ar
kisaacs has quit [Ping timeout: 256 seconds]
<Antrix[m]> while running the code, how does the user specify how many OS threads should be exposed to the code?
<Antrix[m]> Or can one only specify the number of HPX threads for that program
diehlpk has joined #ste||ar
<zao> You can provide it on the command line as --hpx:threads=4, or via .ini files.
<zao> Or assumedly by diddling with the options you init HPX with, if you init it explicitly.
<Antrix[m]> does hpx:threads represent OS threads or hpx threads?
<Antrix[m]> zao: I suppose hpx threads are soft threads (like MPI threads) and OS threads are actual physical threads
<zao> I believe it controls how many OS threads to shove into the thread pool backing HPX.
<zao> I should try using HPX some day, never managed to fit it into my projects due to amusing external constraints.
<zao> (32-bit plugins, other FunStuff)
<Antrix[m]> zao: I am confused now.
Smasher has quit [Read error: Connection reset by peer]
Smasher has joined #ste||ar
<Antrix[m]> ok, so get_os_thread_count gives --hpx:threads=x many threads for the process
<Antrix[m]> but then why does the code for hello_world say that hpx-threads are executed on OS-threads
<Antrix[m]> what is the difference between OS threads and hpx threads?
<zao> There's a bunch of OS threads in HPX's thread pool, which our scheduler runs HPX work on.
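A small sketch of that distinction (assumes the standard hpx_main.hpp setup; API names as of HPX ~1.x):

```cpp
// --hpx:threads=N controls the OS worker threads; each hpx::async
// below spawns a lightweight HPX thread multiplexed on top of them.
#include <hpx/hpx_main.hpp>
#include <hpx/hpx.hpp>
#include <iostream>
#include <vector>

int main()
{
    // how many OS threads back the HPX thread pool
    std::cout << "OS threads: " << hpx::get_os_thread_count() << "\n";

    // spawn far more HPX threads than there are OS threads; the
    // scheduler interleaves them over the pool
    std::vector<hpx::future<void>> fs;
    for (int i = 0; i != 100; ++i)
        fs.push_back(hpx::async([]() { /* some work */ }));
    hpx::wait_all(fs);
    return 0;
}
```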
<zao> Gonna go off for the night, good luck :)
<Antrix[m]> zao: thanks :)
mcopik has joined #ste||ar
Anushi1998 has joined #ste||ar
anushi has quit [Ping timeout: 276 seconds]
Smasher has quit [Remote host closed the connection]
Smasher has joined #ste||ar
<hkaiser> Antrix[m]: hpx threads are lightweight user-level threads that are run on top of (kernel-)threads
<hkaiser> hpx creates one (kernel-)thread for each core and defines the affinities such that this kernel-thread is run only on a particular core
<hkaiser> you can think of hpx-threads as function calls performed by the kernel-threads
<hkaiser> except that each hpx thread has its own independent stack
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
quaz0r has quit [Ping timeout: 256 seconds]
quaz0r has joined #ste||ar
Smasher has quit [Remote host closed the connection]