Hello Blender Wizards!
I am seeking your help in trying to solve a GIS problem using Blender. Any help, pointers or general discussion related to this will be highly appreciated.
I am a Blender n00b, but I am aware that Blender has a GIS add-on that helps create cityscapes by capturing terrain, buildings, etc. from GIS maps. Suppose a city with 3D buildings, parks, and lakes has been created. Now I need to find all dwelling units from which a particular park/lake is visible.
GIS has something called a viewshed analysis, which finds the area visible from a given point. But that is its limitation: it gives the view from a single point, not from a whole area.
My idea is to create stacked dwelling units (apartments in high-rises) as white objects with unique Object IDs in Blender, and parks/lakes as colored light sources. Upon rendering, it is easy to see which dwelling units are lit up in which color. That is all good for visual analysis.
My question is: is there any way in Blender to get the Object IDs of objects that have non-white colors on their faces? Or do I have to turn to a game engine for this?
Looking forward to the responses. Cheers!
@DontNoodles As I understand it, you want to:
-use OSM
-to import GIS into Blender
-with the BlenderGIS add-on, I take it?
-then use color info from that
-to drive selections
-for making buildings in white areas, populating green areas with trees, populating blue areas with water, etc?
I’m with you till the third bullet point. Thereafter, I’m not planning to use the color info from the OSM/GIS tool. Instead, what I’m suggesting is:
I hope this rewording makes my query more understandable. English is not my first language.
Not a Blender user myself, but these answers are general CG concepts. What you want is light baking: you can cook the lighting down to a UV-mapped texture, then bake out the object IDs to the same UV coordinates, and you only have to compare the two.
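To make the "compare the two" step concrete, here is a minimal, Blender-agnostic sketch in plain Python. It assumes you have exported the two bakes as aligned per-texel arrays — one holding an object ID per texel, one holding the baked light color; the names and the white/ID conventions are illustrative, not any particular Blender API:

```python
def lit_object_ids(id_pixels, color_pixels, white=(255, 255, 255)):
    """Collect the IDs of objects whose texels received non-white light.

    id_pixels and color_pixels are aligned: texel i of the ID bake
    corresponds to texel i of the light bake (same UV layout).
    """
    lit = set()
    for obj_id, color in zip(id_pixels, color_pixels):
        if obj_id != 0 and color != white:  # 0 = background / no object
            lit.add(obj_id)
    return lit

# Toy 4-texel frame: dwelling units 1..3, one texel of green park light
# landing on unit 2.
ids    = [1, 2, 3, 0]
colors = [(255, 255, 255), (0, 200, 0), (255, 255, 255), (255, 255, 255)]
print(lit_object_ids(ids, colors))  # {2}
```

In a real pipeline the two pixel lists would come from the saved bake images; the comparison itself stays this simple because both bakes share the same UV coordinates.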
The other method that I have employed for a similar question using Houdini is a directional variant of Ambient Occlusion.
You can do this for each method to see how much of each feature is visible to each part of the buildings, then just store the floor number on each point as well, and bada boom, you have your mapping. Just have a shader sample the point values and bake it out to a texture.
There are many advantages of an AO model over trying to use light baking to get the same info. Primarily, speed: you don’t need nearly as many sample points on either end to calculate it. Secondly, you get much more control over the details and can extract statistical information about the visibility. You can sample values on each sample point that can be aggregated and interrogated while it is doing all of the AO calculations.

I could go on about this, but it would likely only become relevant once you saw how it worked. As a for-instance, you can place indexes on points in a park to represent points of interest, like a fountain or gazebo, and then as the visibility is being sampled you add the index to a set if the ray is successful. Boom: now you have a mapping of not only how much of the park that spot can see, but also which points of interest it can see, with essentially zero increase in calculation time.

To do the same with light baking you would need to do a separate render. Also, with lighting you have to worry about falloff on the light, so it becomes difficult to use over a certain distance.
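The point-of-interest trick described above can be shown with a tiny, engine-free sketch. This is plain Python in 2-D (rays as line segments, occluders as wall segments), with illustrative names and toy geometry — not Houdini or Blender code — just to show how visibility fraction and the POI set come out of the same loop:

```python
def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 strictly crosses segment q1-q2 (2-D)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(q1, q2, p1)
    d2 = cross(q1, q2, p2)
    d3 = cross(p1, p2, q1)
    d4 = cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def visibility(view_point, park_points, occluders):
    """Fraction of park sample points visible, plus the set of visible POIs."""
    seen = set()
    hits = 0
    for pt, poi in park_points:
        blocked = any(segments_intersect(view_point, pt, a, b)
                      for a, b in occluders)
        if not blocked:
            hits += 1
            if poi is not None:
                seen.add(poi)  # free side product of the same ray
    return hits / len(park_points), seen

view = (0.0, 0.0)                    # one balcony sample point
park = [((10.0, 0.0), "fountain"),   # park samples, some tagged as POIs
        ((10.0, 5.0), None),
        ((10.0, -5.0), "gazebo")]
wall = [((5.0, -1.0), (5.0, 1.0))]   # an occluding building edge

frac, pois = visibility(view, park, wall)
print(frac, pois)  # ~0.67 visible; the fountain ray is blocked, the gazebo is seen
```

The same structure carries over to 3-D ray casts: the only change is the occlusion test, while the aggregation (fraction plus POI set) stays identical and costs nothing extra per ray.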
I totally get your point as to how it may be faster. Let me read up on ambient occlusion and, since I’ve never worked with Houdini, see whether I can implement the same using any of the tools that I’m conversant with. Thank you for making me aware of this.
Pretty sure Blender has a Python API. And AO research on its own probably won’t yield much juice for what I described. The topic of limiting the scope of AO calculations to calculate other things is kinda not a thing. I actually used it as the subject of my Master’s thesis. I was using it to calculate the exposure of scenes to the solar ecliptic throughout the year so I could calculate fading from direct sun exposure on textures. That is why I shared a step-by-step instead of a link. The principle for what you want to do is the same though. Measuring the exposure of geometry against some other geometry.
From what I just read, you want to use the Scene.ray_cast() function. Usage should be straightforward.
```python
# Pseudocode made concrete (Blender 2.91+ signature). Assumes the building
# and park point clouds already exist as lists of mathutils.Vector.
import bpy

scene = bpy.context.scene
depsgraph = bpy.context.evaluated_depsgraph_get()

for point in building_point_cloud:
    total_hit = 0
    for destination in park_point_cloud:
        ray = destination - point
        # ray_cast returns (result, location, normal, index, object, matrix)
        hit, _, _, _, _, _ = scene.ray_cast(depsgraph, point,
                                            ray.normalized(),
                                            distance=ray.length)
        total_hit += hit
    visibility = total_hit / len(park_point_cloud)
    store_visibility(point, visibility)  # placeholder: store per point or in a list
```
That should be enough to get you started. I am 99% unfamiliar with the Blender Python API, but this should get you there if you are even remotely experienced with it. There are obviously optimizations that can be done as this is very brute force, I was just trying to illustrate the basic loop.
Thank you, I’ll explore along these lines.
I would be very curious to know what you come up with.
Currently, I’m trying to find a lazy (for me) way out. I’m learning to bake the lighting on objects and figuring out how to do it iteratively, for all objects of choice (buildings) in a scene, automatically. Thereafter, I hope to do image processing on the unwrapped light-bake maps to detect the desired colors. It should be possible to crop these images to detect light on individual faces and/or find the percentage of area exposed too.
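The per-face percentage step could start as simply as counting colored texels in each cropped region. A toy sketch in plain Python (illustrative only; assumes a cropped face texture has been flattened to a list of RGB tuples, with a small tolerance so near-white compression noise isn’t counted as light):

```python
def lit_fraction(pixels, white=(255, 255, 255), tol=10):
    """Fraction of texels that picked up non-white (colored) light."""
    lit = sum(1 for px in pixels
              if any(abs(c - w) > tol for c, w in zip(px, white)))
    return lit / len(pixels)

# One face's cropped bake: one of four texels caught the park's green light,
# one is near-white noise that the tolerance correctly ignores.
face = [(255, 255, 255), (30, 210, 40), (255, 255, 255), (252, 250, 251)]
print(lit_fraction(face))  # 0.25
```

Run over every face crop of a building, this gives the "percentage of area exposed" figure directly, and thresholding it classifies faces as park-facing or not.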
If this does not work out, I’ll try the progressively more involved approaches suggested in the thread as I learn and become comfortable with what you have kindly given me pointers for.