<Yorlik>
That would suggest roughly 50us per call to get_ptr.
<nk__>
Yorlik, aah! didn't notice
<Yorlik>
nk__: You use get_ptr a lot?
<nk__>
Yorlik, not really
<Yorlik>
I had it inside my tight loop which made it really pop up.
shahrzad has joined #ste||ar
<hkaiser>
Yorlik: makes sense
<hkaiser>
I think I told you to be careful with get_ptr
<Yorlik>
Now I saw it with my own eyes :D
<Yorlik>
Dem evil fella ...
<hkaiser>
Yorlik: be aware that as long as you hold the shared_ptr you can't migrate the object
<Yorlik>
It's a raw pointer now.
<Yorlik>
Just a ref
<hkaiser>
not good
<Yorlik>
I think it's safe, where I use it
<hkaiser>
if you migrate it the pointer will become invalid, while if you hold it as the shared_ptr returned from get_ptr this can't happen, as the sp will delay any migration
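A minimal sketch of the guarantee being discussed, assuming a hypothetical HPX server component `game_object` defined elsewhere (`hpx::get_ptr` itself is real HPX API; header names are approximate):

```cpp
// Sketch only: 'game_object' stands in for a real HPX server component.
#include <hpx/include/components.hpp>

#include <memory>

void update_one(hpx::id_type const& id)
{
    // get_ptr yields a shared_ptr; while it stays alive, HPX delays
    // any migration of the component away from this locality.
    std::shared_ptr<game_object> sp = hpx::get_ptr<game_object>(id).get();
    sp->update();    // safe: the object is pinned

    game_object* raw = sp.get();
    // Danger: if 'raw' is kept past the lifetime of 'sp', a later
    // migration leaves it dangling - exactly the risk raised above.
}
```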
<Yorlik>
Because it's inside the update loop of the entity, which is a part of the gameobject anyway.
<Yorlik>
It is used essentially inside the gameobject, though technically not.
<Yorlik>
The Object is tied to the entity which uses the pointer in its update
<hkaiser>
I would advise against holding on to the raw pointer
<hkaiser>
rule (1): no raw pointers!
<Yorlik>
I could store a gameobject reference instead.
<Yorlik>
Same thing, just different semantics
<hkaiser>
or the shared_ptr
<hkaiser>
well, the semantics is exactly the difference
<Yorlik>
If I ever try to migrate a gameobject which has an entity in the update cycle I have bigger problems.
<Yorlik>
They are like Siamese twins.
<hkaiser>
why not use the shared_ptr then and be _sure_ that nothing can happen?
<Yorlik>
It's bigger, though not much.
<Yorlik>
I want my entities to be as small as possible.
<hkaiser>
it's bigger *sure*
<Yorlik>
Though alignment might make that argument moot
<Yorlik>
I store the id_type in the same struct.
<hkaiser>
readability and maintainability of code is more important than anything
<hkaiser>
abstractions are important!
<Yorlik>
As long as they are free I agree fully.
<hkaiser>
optimizations always break abstractions and make code worse
<Yorlik>
Like optimizing away get_ptr ? ;)
<hkaiser>
if somebody reads your code and sees a pointer the first question is: 'hmmm, does this guy own the memory it points to or not?'
<Yorlik>
No - generally you are right
<hkaiser>
no way to tell
<hkaiser>
Yorlik: I'm always right ;-)
<Yorlik>
raw pointers never ever own anything in our code.
<hkaiser>
why not use references, then
<Yorlik>
If you see a raw pointer in my code you know it's a non-owning reference thing
<hkaiser>
why not use references, then
<Yorlik>
I should, true
<Yorlik>
bad habit probably, I admit
<hkaiser>
whenever you're tempted to write a star after a type - think again!
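A tiny illustration of the convention being argued about; the type and functions are hypothetical:

```cpp
#include <memory>

struct entity { /* ... */ };

// The signature states the ownership; a bare star does not:
void update(entity& e);                   // non-owning, never null
void observe(entity* e);                  // non-owning, maybe null - why?
void adopt(std::unique_ptr<entity> e);    // caller hands over ownership
void share(std::shared_ptr<entity> e);    // shared ownership, pins lifetime
```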
<Yorlik>
pointers have a dubious attraction - dunno why.
<hkaiser>
Yorlik: yah, they're so nicely dangerous - you feel cool - admit it
<Yorlik>
Yea - it f... is fun - lol
<hkaiser>
until you cut your finger and start chasing memory bugs
<Yorlik>
You feel like a real engineer using them. lol
<hkaiser>
this attitude is the reason why people think C++ is bad
<hkaiser>
man - you're writing code that will serve its purpose for years, do yourself a favor and clean it up
<hkaiser>
(1) no raw pointers!
<Yorlik>
After the coming milestone I'll do a major cleanup and doc phase.
<hkaiser>
(2) no raw loops
<hkaiser>
(3) no raw threads
<Yorlik>
I'll chase down all pointers, promised.
<hkaiser>
Yorlik: famous last words - haha
<hkaiser>
cleaning code after it was written is twice the effort, so it will never happen by definition
<Yorlik>
I have cleaned up a lot already. And really I'm trying to write clean code. My best friend is the unique_ptr.
<nk__>
Yorlik, I thought you were feeling like a "real" engineer a moment ago :P
<Mukund>
here I have attached both stdout and stderr
<Mukund>
Idk where the error is from
<Mukund>
and I would also like to know the min storage requirement for hpx
<jbjnr>
ms[m]: I'll try to reproduce it
<simbergm>
Mukund: does /home/mukund/open-source/hpx/my_build/bin/hpxrun.py exist?
<jbjnr>
Mukund: if you build all tests and examples, the build dir will be much bigger than 20GB
<jbjnr>
if you only build individual tests, all is fine
<simbergm>
in debug
<simbergm>
release should be smaller
<simbergm>
a lot smaller
<simbergm>
Mukund: you're trying to install into /usr/local which is usually owned by root
<simbergm>
are you doing sudo make install or make install?
<simbergm>
you can install into /usr/local but that requires root privileges, or you can set CMAKE_INSTALL_PREFIX to something in your home directory which will not require root
Mukund has quit [Ping timeout: 240 seconds]
Abhishek09 has quit [Remote host closed the connection]
<simbergm>
btw, I just built everything we have in debug mode, the build directory is now 19 GB
Mukund has joined #ste||ar
Mukund has quit [Remote host closed the connection]
Abhishek09 has joined #ste||ar
ibalampanis has joined #ste||ar
ibalampanis has quit [Remote host closed the connection]
Hashmi has joined #ste||ar
<rori>
MukundVaradaraja: I managed to reproduce the error. I couldn't cp the hpxrun.py file into the /usr/local dir because the owner of the directory is root (see the output of `ls -l /usr/local`). Please consider specifying another `-DCMAKE_INSTALL_PREFIX=<path>` pointing to a directory you own (to see your username: `whoami`)
nikunj97 has joined #ste||ar
<kordejong>
When building in RelWithDebInfo mode, I build HPX using a commit that contains some APEX fixes by jbjnr that I am interested in (commit 8fe93d8). This works OK when not on a cluster (networking off), but I get an error when on a cluster (networking mpi, slurm). A release build with HPX 1.4.1 works fine. The error message is: `pthread_setaffinity_np: Invalid argument`. I have trimmed `--hpx:threads` and `--hpx:bind` from my
<kordejong>
command and the error stays. I have the feeling it has to do with sbatch / HPX interaction for this specific commit. Is this something that is known? Should I maybe use another commit for improved OTF2 traces?
Abhishek09 has quit [Remote host closed the connection]
ibalampanis has joined #ste||ar
<ibalampanis>
Hello to everyone!
<ibalampanis>
Is anyone there who could help me with GSoC?
<simbergm>
Kor de Jong: do you know if this happens on master as well? I'm not aware of any errors of that kind... and is this running one of our examples/tests or your own application? jbjnr might have seen it before
<simbergm>
do you get a stack trace or something?
<kordejong>
ms: I can try with master. Are the APEX fixes merged into master? I don't get a trace, just the pthread error message. And this is with running my own application. I will first try with the above mentioned commit and with an HPX example.
<simbergm>
Kor de Jong: unfortunately not merged into master
<simbergm>
in principle that PR doesn't touch anything to do with affinity, which is why it would be curious if it only shows up over there, but the changes may have secondary effects...
<simbergm>
jbjnr: btw, should we interpret your comments from yesterday that you're just going to abandon your open PRs? (please don't)
hkaiser has joined #ste||ar
Saswat85 has joined #ste||ar
ibalampanis has quit [Remote host closed the connection]
Saswat85 has left #ste||ar [#ste||ar]
Hashmi has quit [Quit: Connection closed for inactivity]
nikunj97 has quit [Ping timeout: 256 seconds]
kale has joined #ste||ar
nikunj97 has joined #ste||ar
<Yorlik>
hkaiser: yt?
<hkaiser>
here
<Yorlik>
Is there a way to use parloop, but run the chunks for yourself?
<Yorlik>
Like you have a function taking a begin and an end?
<Yorlik>
I realized diving in and out of Lua is pretty costly
<Yorlik>
Such a model would allow me to avoid that and do the partial looping inside Lua
<Yorlik>
So - HPX would just do the chunks and start the task
<Yorlik>
I would use HPX for the chunk division and the timing measurement only.
<Yorlik>
Instead of update(index) I would have update(start, end)
<jbjnr>
ms[m]: I spent weeks working on apex fixes. Addressed all comments and fixed everything I could, but my PR was not merged. I stopped caring.
<zao>
Yorlik: Sounds like a reasonable thing to have.
<Yorlik>
It would save me thousands of in and outs of Lua
<Yorlik>
I just realized a huge performance drop in raw iteration speed the moment I started calling into Lua. Lua in itself is pretty fast, though. And since we don't have a real FFI in Lua it's better to avoid a lot of ins and outs
<Yorlik>
Just calling an empty Lua function on every iteration is costly. OFC I am doing checks to avoid that, but once an object is active (has messages) I need to dive in
<zao>
nikunj97: Did you do a bunch of stuff with partitioning, or was that some other person?
<nikunj97>
zao, with my 2d stencil?
<nikunj97>
I essentially increased the grain size to hide the overheads of parallel_for
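A rough sketch of the grain-size idea, using spellings from the HPX releases of that era (`static_chunk_size` attached via `.with()`); `update_cell` is a hypothetical per-element function:

```cpp
#include <hpx/include/parallel_for_loop.hpp>

#include <cstddef>

void update_cell(std::size_t i);    // hypothetical element update

void update_grid(std::size_t n)
{
    // Each task gets 1024 elements, amortizing scheduling overhead.
    hpx::parallel::execution::static_chunk_size cs(1024);
    hpx::parallel::for_loop(
        hpx::parallel::execution::par.with(cs), std::size_t(0), n,
        &update_cell);
}
```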
Pranavug has joined #ste||ar
<Yorlik>
Grain size a mystery is, to solve with measuring, indeed. ;)
<Pranavug>
Hello everyone, I, Pranav Gadikar, have submitted a draft proposal for GSoC. I request the admins and mentors of the ste||ar group to please review it and suggest any changes.
<Pranavug>
Thanks in advance
<kordejong>
<biddisco[m] "ms[m]: I spent weeks working on "> That is unfortunate. Like you I think APEX+Vampir can be a great way to gain insights into what is going on in an HPX run at runtime. Are you using a separate branch were you keep your APEX fixes which I can use? I am willing to test things out if that would help getting the fixes merged into the main branch.
rtohid has joined #ste||ar
Pranavug has quit [Read error: Connection reset by peer]
Yorlik has quit [Read error: Connection reset by peer]
Yorlik has joined #ste||ar
nan8 has joined #ste||ar
bita has joined #ste||ar
akheir has joined #ste||ar
akheir has quit [Remote host closed the connection]
akheir has joined #ste||ar
<hkaiser>
Yorlik: still there?
<Yorlik>
Yes
<hkaiser>
the question you asked
<Yorlik>
Ya?
* Yorlik
is looking at parloop code right now
<hkaiser>
what I would suggest is to use a static chunking policy that fixes the number of generated chunks and dispatch to Lua where you can do whatever you like
<Yorlik>
Like giving Lua a fixed starting point and a fixed size?
<hkaiser>
I meant the lamda (loop body) dispatches to lua
<Yorlik>
The lambda is called for every iteration, right?
<hkaiser>
you don't have to iterate over the objects, you can iterate over chunks yourself using parloop, so that the lambda will be called once for each chunk
<Yorlik>
That's exactly what I was asking for.
<hkaiser>
well, just do it
<Yorlik>
Is it in examples somewhere how to write that?
<hkaiser>
you know how many chunks you want, so create a loop for 0...num-chunks
<hkaiser>
or use bulk_execute yourself, for_loop is just a fancy wrapper around the executor::bulk_async_execute
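A sketch of that suggestion, again with HPX-1.4-era spellings: loop over chunk indices instead of objects, so Lua is entered once per chunk; `dispatch_to_lua` is a hypothetical wrapper around the Lua call:

```cpp
#include <hpx/include/parallel_for_loop.hpp>

#include <algorithm>
#include <cstddef>

void dispatch_to_lua(std::size_t begin, std::size_t end);    // hypothetical

void update_all(std::size_t num_objects, std::size_t num_chunks)
{
    std::size_t const chunk = (num_objects + num_chunks - 1) / num_chunks;

    hpx::parallel::for_loop(
        hpx::parallel::execution::par, std::size_t(0), num_chunks,
        [=](std::size_t c) {
            std::size_t const begin = c * chunk;
            std::size_t const end = (std::min)(begin + chunk, num_objects);
            dispatch_to_lua(begin, end);    // update(start, end) in Lua
        });
}
```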
<Yorlik>
I'll look into that.
<Yorlik>
There is another open strategy question I have not finally decided yet which would play very nice with this:
<Yorlik>
The core data structure fundamentally is c-array like
<Yorlik>
Inside this array are slots holding deleted objects which are marked as deleted
<hkaiser>
ok
<Yorlik>
So I can zip over the array, just skipping inactive slots, still using cache friendliness of this approach
<Yorlik>
But
<Yorlik>
I might add one level of indirection
<zao>
Yorlik: You should see my current codebase, O(n^2) loop over 10M entries :D
<hkaiser>
Yorlik: what's your question?
<zao>
Spun off one task per interaction, which was like three FMAs and a bunch of trig, waaaay too fine-grained.
<Yorlik>
I am thinking of adding each object reference of objects that have mail to a vector, sort that vector by slot and zip over that vector instead.
<hkaiser>
ok
<Yorlik>
It's additional work, but in practice many objects in the array will either be deleted and in the freelist, or they might have no messages, since they're dormant / no players around
<Yorlik>
Using a sorted vector of references I could probably drastically speed up iteration in a real-world situation
<hkaiser>
why sorted?
<Yorlik>
And since it's sorted it still benefits the cache as much as possible.
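A sketch of the scheme as described; the `slot` layout is hypothetical:

```cpp
#include <cstddef>
#include <vector>

struct slot
{
    bool deleted = false;
    std::size_t mail_count = 0;
    bool has_mail() const { return mail_count != 0; }
    void update() { /* run the entity update */ }
};

void update_active(std::vector<slot>& objects)
{
    // Pass 1: collect indices of live slots that have messages.
    // Walking the array front to back yields them already in slot
    // order, so no explicit sort is needed.
    std::vector<std::size_t> active;
    active.reserve(objects.size());
    for (std::size_t i = 0; i != objects.size(); ++i)
        if (!objects[i].deleted && objects[i].has_mail())
            active.push_back(i);

    // Pass 2: touch memory in ascending order (cache friendly).
    for (std::size_t i : active)
        objects[i].update();
}
```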
<Yorlik>
Not sure if it would be worth it
<hkaiser>
measure!
<Yorlik>
The problem is I do not yet know enough about the properties of my workload and it might actually change a lot at runtime
<Yorlik>
So - I need to create a flexible, but still fast system.
<Yorlik>
It's easy to overengineer here or to lose a lot of performance.
<Yorlik>
I need to let it grow organically and be adaptable.
<hkaiser>
you'll never know without measurements
<Yorlik>
I think I'll write a simple prototype game that reflects realistic conditions
<hkaiser>
perf-counters!
<Yorlik>
Yes
<Yorlik>
I had already connected perf counters to Grafana / Kibana
<Yorlik>
We will use this extensively
<Yorlik>
I think I'll finish the scripting side of things first, make a prototype simulation and then measure
<Yorlik>
The good thing is it's all nicely decoupled - I can easily change scheduling / looping and stuff.
<Yorlik>
But I'll look into this bulk execution thing - it sounds very reasonable.
<hkaiser>
bulk_async_execute is a customization point for executors, it's used by all of our parallel algorithms, so if you want to have something truly customizable - go check it out
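A sketch of driving that customization point directly (HPX-1.4-era namespaces); `process_chunk` is a hypothetical chunk worker:

```cpp
#include <hpx/include/lcos.hpp>
#include <hpx/include/parallel_executors.hpp>

#include <cstddef>
#include <numeric>
#include <vector>

void process_chunk(std::size_t c);    // hypothetical chunk worker

void run_chunks(std::size_t num_chunks)
{
    // The shape determines how many invocations are made;
    // here, one task per chunk index.
    std::vector<std::size_t> shape(num_chunks);
    std::iota(shape.begin(), shape.end(), std::size_t(0));

    hpx::parallel::execution::parallel_executor exec;
    auto futures = hpx::parallel::execution::bulk_async_execute(
        exec, &process_chunk, shape);
    hpx::wait_all(futures);
}
```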
<Yorlik>
The raw iteration before I started using Lua was good however - I have a good basis for whatever optimizations.
<kale>
diehlpk_mobile[m], nikunj, rtohid, zao: I've created a second draft for my GSoC proposal for the Pip package for the Phylanx project. I've updated the solution according to the cons and issues I found in my first solution. I would be glad to hear your opinions about the proposed solution and possible improvements on it.
<zao>
kale: you forgot blaze_tensor in the list of prereqs and have misspelled "phalanx" :)
<nikunj>
kale: it looks like you've updated your implementation. You didn't quite explain why you shifted from your previous implementation?
<zao>
kale: Did you consider manylinux2010? CentOS 6-based and is supported by pypi.org. Might not matter too much if you intend to install a newer toolchain anyway.
nikunj97 has joined #ste||ar
<nikunj97>
kale, zao is right. You should consider manylinux2010 for your build
<nikunj97>
kale, this looks like a much cleaner approach to me at first glance
<nikunj97>
do you have a github link to the toy project you're trying to explain? I'd like to test it on my local machine for its authenticity
<Yorlik>
zao: Concerning your remark above: You always wanna be cache friendly - something you easily screw up when misusing a task system. In general applications, IIRC, you have like 50% wait time for memory.
<kale>
nikunj, The problem with my previous implementation was the runtime and computation it required on the user end. My first solution was to invoke a cmake script for the respective dependencies if they aren't installed within the system. First of all, the dependencies are large and building all of them is redundant since a Phylanx user will never use these
<kale>
libraries.
<nikunj97>
so you came up with the idea of pre-shipped libraries
akheir has quit [Read error: Connection reset by peer]
<kale>
nikunj, I've tested it on my machine. I'll submit the github link for the toy-pip package by tomorrow
akheir has joined #ste||ar
<nikunj97>
kale, how are you handling package dependency redundancy btw?
<kale>
zao, Thanks for pointing out that manylinux2010 is centos-6 based. Since I am going to build the complete toolchain myself in the docker image, I don't think that will change anything.
diehlpk_work has joined #ste||ar
<zao>
Does HPX have any TLS implementation or was the crypto all homegrown libsodium?
<nikunj97>
kale, after going through the proposal I feel that it is in line with what I expect from the project.
<zao>
kale: I'm wondering if there might be some external dependencies that might be hard to avoid, like the MPI implementation.
<zao>
Like how the Fedora HPX packages come in three different flavours depending on what MPI implementation they use.
<zao>
Whether this is in scope and matters, I don't know :D
karame_ has quit [Remote host closed the connection]
<kale>
zao: I will be building HPX with TCP and not MPI to test Phylanx. Once I complete the project, I will try to build it with MPI.
<hkaiser>
zao: what has TLS in common with crypto?
<Yorlik>
I wonder if HPX could profit from a UDP-based transport with different ack mechanics than TCP.
<Yorlik>
Some sort of reliable UDP
<hkaiser>
Yorlik: man - stop prematurely optimizing things
<Yorlik>
:(
<Yorlik>
Sure - you have low latencies in a datacenter anyways.
<Yorlik>
But think of a gameserver cluster distributed worldwide
<Yorlik>
Send an ack for a sequence of packets and RESEND everything not yet acked.
akheir has quit [Read error: Connection reset by peer]
<Yorlik>
So you don't wait for acks
akheir has joined #ste||ar
<Yorlik>
Just one of many ways to tweak it.
<hkaiser>
or combine those messages into one tcp operation and get the same
<hkaiser>
Yorlik: you can gain much more from overlapping computation with networking than from reducing latencies in the networking itself
<hkaiser>
hide latencies - don't avoid them
<Yorlik>
That works only to some extent - but yes - hiding latencies is important in a game.
<Yorlik>
E.g. for networked physics or combat you don't want to just hide.
<hkaiser>
trigger the objects that require networking first, then do the rest of your updates, then receive all messages for the next round
<hkaiser>
I'd say - exhaust latency hiding, then start thinking on how to reduce the remaining critical spots
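A sketch of the overlap pattern outlined above; `remote_update` stands in for a real HPX action, and `entity` is hypothetical:

```cpp
#include <hpx/include/async.hpp>
#include <hpx/include/lcos.hpp>

#include <vector>

struct entity { void update(); };
void remote_update(hpx::id_type const& id);    // stand-in for an action

void frame(std::vector<hpx::id_type> const& networked,
    std::vector<entity>& local)
{
    // 1) Trigger the objects that require networking first ...
    std::vector<hpx::future<void>> pending;
    pending.reserve(networked.size());
    for (auto const& id : networked)
        pending.push_back(hpx::async(&remote_update, id));

    // 2) ... do the rest of the updates while messages are in flight ...
    for (auto& e : local)
        e.update();

    // 3) ... then collect the replies for the next round.
    hpx::wait_all(pending);
}
```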
karame_ has joined #ste||ar
<Yorlik>
It's a big topic. We can't really discuss it in depth by typing - we'd get sore fingers soonish. But there's a reason why people use UDP in gaming and not TCP - because TCP sucks.
<hkaiser>
they use udp because hiding latencies is a pita with conventional programming models, reducing latencies is all they know
<Yorlik>
And having inter-server object interaction requires low latency - when in a single datacenter it is a non-issue though - so the "premature" judgement is valid for that (inter-node communication)
<hkaiser>
and out of a false sense of 'need for speed', certainly
<hkaiser>
Yorlik: nonsense
<Yorlik>
It depends a lot on the type of game.
<Yorlik>
What is nonsense?
<hkaiser>
you can have minimal latencies and still suck if you're not able to properly respond
<zao>
hkaiser: Sorry, TLS as in what OpenSSL provides.
<Yorlik>
Too true. But you still need the low latencies
<hkaiser>
asynchrony combined with parallelism is the key
<hkaiser>
that can get you a long way
<zao>
I knew HPX had some sort of security and might've used crypto functions from there.
<zao>
But that (Vex's code?) was all with libsodium in the end I think, if it even survived.
<Yorlik>
hkaiser: Did you see gafferongames' videos on networked physics?
<zao>
(so not thread-local-storage)
<hkaiser>
zao: VeXocide had done some security work in HPX using libsodium a long time ago
<hkaiser>
Yorlik: no
<Yorlik>
hkaiser: This is some of the best stuff you get on the net on the topic: https://gafferongames.com/
<hkaiser>
reducing latencies is like fighting fire with gasoline
<hkaiser>
it's the wrong focal point
<Yorlik>
Its one of several points. There is no silver bullet.
<hkaiser>
the most you need in terms of networking interaction in games is that the response has to be back during the next frame
<Yorlik>
That is 16 ms in modern games
<Yorlik>
In an MMO you might get away with 100 ms
<hkaiser>
so see
<hkaiser>
what's the point in shaving a ns from latencies then?
<hkaiser>
overlap with computation - YES
<Yorlik>
Got a dev meet now ... Going to be a bit unresponsive.
<Yorlik>
BBL.
<diehlpk_work>
hkaiser, I tweeted about the HPX Community Survey Summary
<hkaiser>
great!
ibalampanis has joined #ste||ar
<hkaiser>
nan8: pls add -DHPX_PROGRAM_OPTIONS_WITH_BOOST_PROGRAM_OPTIONS_COMPATIBILITY=OFF while configuring HPX
<nan8>
hkaiser Thanks. I will add it
<bita>
simbergm, I have reopened #1112 as hkaiser mentioned #1140 does not supersede it (so we don't forget about it). I don't know much detail about it though
<hkaiser>
bita: thanks, that's fair enough
kale has quit [Ping timeout: 240 seconds]
akheir has quit [Read error: Connection reset by peer]
akheir has joined #ste||ar
K-ballo has quit [Ping timeout: 240 seconds]
akheir has quit [Read error: Connection reset by peer]
akheir has joined #ste||ar
<ibalampanis>
Hello bita, I would like to ask a question about GSoC
<Yorlik>
OK, back. hkaiser: Concerning shaving time off latencies: We're not talking ns but ms, not even us. But again: inside a datacenter it's a non-issue.
<hkaiser>
Yorlik: let's talk later, in a meeting
<Yorlik>
OK. I'm around.
<nan8>
hkaiser I could build and pass distributed tests now by adding that line. Thanks!
<ibalampanis>
bita, hkaiser, is it compulsory to implement a MM (matrix-matrix multiply) in HPX for GSoC? I'm interested in the "Test Framework for Phylanx Algorithms" project
<ibalampanis>
Could I make one with Phylanx?
<hkaiser>
nan8: excellent!
<hkaiser>
ibalampanis: as long as you do _not_ just do it by saying A * B, where A and B are blaze matrices ;)
<ibalampanis>
To be honest, I have not understood it! hkaiser
<hkaiser>
ibalampanis: doing a MM with blaze is a simple A * B, that's not what we want you to do
Hashmi has joined #ste||ar
<ibalampanis>
hkaiser I apologize for my misunderstanding. I don't know the meaning of the word 'blaze'.
<ibalampanis>
Shame on me. Thank you! You are very kind hkaiser
Abhishek09 has joined #ste||ar
Abhishek09 has quit [Remote host closed the connection]
<zao>
ibalampanis: The purpose of the sample is to demonstrate that you have some sort of idea how to use the languages and libraries involved in the project.
<zao>
A matrix-matrix multiply is straightforward enough mathematics-wise while still needing a bit of programming awareness, and in the HPX case forces you to actually use the library a bit.
nan8 has quit [Ping timeout: 240 seconds]
<ibalampanis>
zao, thank you for your response!
Abhishek09 has joined #ste||ar
ibalampanis has quit [Remote host closed the connection]
<diehlpk_work>
Defense of Maxwell is starting soon
<bita>
diehlpk_work, we are waiting for the host I guess
<hkaiser>
bita: Maxwell is the host
<bita>
Are you connected? Mine is not connected because Maxwell has not started it I guess
<hkaiser>
same here
<diehlpk_work>
same here
akheir has quit [Read error: Connection reset by peer]
akheir has joined #ste||ar
nan2 has joined #ste||ar
rtohid has quit [Remote host closed the connection]
Abhishek09 has quit [Remote host closed the connection]
<bita>
ibalampanis, and which algorithm do you want to use on it?
<bita>
I see it was uploaded 3 years ago and last operated on a year ago. Can you use a newer and larger dataset?
akheir1 has joined #ste||ar
Hashmi has quit [Quit: Connection closed for inactivity]
akheir has quit [Ping timeout: 265 seconds]
ibalampanis has quit [Ping timeout: 240 seconds]
<bita>
As ibalampanis is disconnected frequently, I am chatting with him over email
akheir1_ has joined #ste||ar
akheir1 has quit [Ping timeout: 252 seconds]
<Yorlik>
hkaiser: getting this error when compiling current HPX stable on Windows:
<Yorlik>
\hpx\stable\libs\topology\include\hpx\topology\topology.hpp(27): fatal error C1083: Cannot open include file: 'hwloc.h': No such file or directory
<Yorlik>
The file is there.
<Yorlik>
The following variables are set (correctly - I checked):
<zao>
hpx_init indeed doesn't pull in the include directory in this version.
<simbergm>
let me do some digging
<Yorlik>
WTFFFSOMGLOL
<Yorlik>
;)
<zao>
I'm gonna let VS finish hanging and see how master fares.
<Yorlik>
Time for more coffee, while the HPX emergency response team is digging :)
iti has joined #ste||ar
weilewei has joined #ste||ar
<zao>
simbergm: master is also borken.
<iti>
Hello everyone, I have one question: will it be fine if I share the link to the hpx toy application after the proposal submission? I was completing the proposal and it took more time than I expected. I have, however, tried some small codes from the tutorial (and documentation); I still need to try some more functions to get comfortable though.
<hkaiser>
sure, feel free
<iti>
Thank you ^^
<zao>
In fact, "master" is "stable".
<Yorlik>
So they say ... ;)
<hkaiser>
Yorlik: it wouldn't be stable if it hadn't passed the CI
<Yorlik>
ROFL ... the almighty CI ...
<simbergm>
"stable" is perhaps a bit of a misnomer
<simbergm>
it means it passed the linux builds on circleci
<zao>
"shipit" :P
<Yorlik>
Silicium where the Lunix shines oh so bright ... :)
<Yorlik>
Someone needs to sponsor HPX a damned Windows CI.
<Yorlik>
Does HPX have any corporate supporters?
<simbergm>
Yorlik or zao, can one of you try adding a `target_link_libraries(hpx_init PUBLIC hpx)` around here?
<zao>
In what file?
<simbergm>
that's the thing, we have both github actions and travis building on windows and they didn't say a damn thing
<simbergm>
yeah, but I'm wondering what included topology.hpp, and what included the file that included topology.hpp, and so on until the hpx_init source files
<zao>
There's /showincludes
<zao>
Err, /showIncludes
<simbergm>
although, finding that out is just for my curiosity; the fix is to link hpx_init to hpx, which it should be doing anyway