The Public 3D Asset Library
Recent Updates
BLOG.POLYHAVEN.COM
Dev Log #22

OK, so it's been a little more than two months since the last Dev Log, but this time, we've got a little more than just progress updates to share!

Namaqualand is out!

After more than a year in the making, we're finally happy to share our latest asset collection! A rocky South African desert, blooming with wildflowers and succulents.

I've written a more detailed blog post about the trip and the project as a whole as well, which you can read here: https://blog.polyhaven.com/namaqualand/

Browse the assets
Download the scene file
Read more about the project

Qwantani HDRI Trip

At the southern point of the Sterkfontein Dam, just shy of the Drakensberg mountains, is a reasonably sized hilltop that sits a short 10-minute hike from a holiday resort I visited several years ago: Qwantani.

Jarod and I revisited this location in August for a week with the sole purpose of capturing more HDRIs from the hilltop. The weather was mostly clear the whole week, which is not super exciting, but we did at least capture what we came for (including testing out our new astro lens). We're still working on stitching all the HDRIs, but you'll see them released over the next few weeks.

Into the Deep End of Game Design

Dario, Rico, Greg, Jenelle, James

In a blog post I wrote earlier this year, I explained a bit about the idea of Poly Haven getting into game development. In a nutshell: we're exploring the idea of making video games as a means to prove the usefulness of our assets, to show that the assets we make can indeed be used for the purposes we intend them for, much like how the Blender Studio creates open movies to prove and improve Blender itself.

At the same time, we'd like to explore whether we can use this as a potential method for funding the creation of our 3D assets. Funding a team that makes free content has always been one of our biggest challenges (I gave a talk about it a few Blender Conferences ago), and we're wondering if selling video games might be a good way to support this.

So, starting in October, we jumped into the deep end of game design: a 5-week sprint to make a game, any game, and get a clearer picture of what this idea might look like (and how much we're in over our heads).

The game that we make this month will be shared exclusively on Patreon to give us some stakes and accountability. Since this is more of a learning experience than a real game project, we won't be publishing the game anywhere else.

No one on the team has any experience making games. Working on assets for games, yes, but nothing remotely connected to the task of designing and implementing gameplay.

We don't intend to become game designers ourselves. Poly Haven, at its core, will remain an asset platform.
Rather, we want to be able to relate to and work better with the designers and developers we hire or collaborate with in the future.

The broad idea is this: learn a bit about game development this year (through this 5-week sprint and a jam or two) so that next year we can hire some talent and try to make something worthwhile that suits Poly Haven's mission too.

Other Projects

Those are the three big ones, but to give you an update on the other things happening in the background:

The moon scans are still in the works; we've made some progress but plan to tackle them fully after the game sprint.

The fabric scans are almost there; we're just waiting for a final few scans before we can set up the materials in Blender and upload them.

My Blender patch for the asset drag-and-drop handler still hasn't been reviewed; however, another similar patch landed which adds a handler for append/linking in general. This could feasibly be used instead for all the features of our add-on that were relying on my patch, so I'll be looking into it soon.

We're still planning to recategorize all our models, but since this may have some unknown impact on users of our site and our add-on, we'd like to wait and figure out new categories for textures and HDRIs first too.
BLOG.POLYHAVEN.COM
Namaqualand

Download the scene file
Browse the assets

After more than a year in the making, we're finally happy to share our latest asset collection: Namaqualand!

Browse the assets

Micro-biomes of South Africa (Source)

In September 2023, the Poly Haven team embarked on a journey to the arid desert near the west coast of South Africa, specifically to the Goegap Nature Reserve. For a handful of weeks in the year, what is usually a dry and sandy landscape scarred by mining activity transforms into a floral paradise following the spring rains.

While we chose this location in the hope that the sparse desert would be a unique but manageable scanning project, we couldn't pass up the opportunity to try to capture what even local adventurers rarely get to see. As luck would have it, we nailed the timing and saw the landscape covered in innumerable flowers.

Jenelle, James, Dario, Jarod, Greg

After surviving the 12-hour drive from Joburg and resting for the night, we kicked off the project with a scouting hike on the first morning to get the lay of the land and begin prioritizing what we needed to capture. That evening, after a gorgeous sunset, we sat down and planned the rest of the week.

We were told that the lodgings were completely without electricity, so we came prepared with many spare batteries for all our cameras and flashes, power banks, a small solar-charged battery, and some adapters to charge devices using our 4x4 SUV as a generator. In the end, though, our unit had a small solar system that provided just enough power to run the fridge and charge our camera gear. We still had to use this sparingly and try to charge as much as possible during the day.

While much of the photoscanning happened in the field (literally), we also brought many of the smaller rocks and other debris into a spare bathroom, where we had assembled a black velvet backdrop to scan against with a turntable. With the combination of a black velvet backdrop and a cross-polarized ring flash setup, we can scan objects "in the void", producing very high-quality geometry and surface textures with minimal data preprocessing.

The ring flash setup, however, is not powerful enough to scan outdoors in direct sunlight, which, of course, is plentiful in the desert. So we also brought our Elinchrom ELB 500 rig, which is strong enough to overpower the sun and is also significantly more ergonomic than our ring flash.

The final photogrammetry rig we utilized is a motorized single-axis gantry that, in combination with the ring flash, partially automates scanning a 2-3 meter wide area and removes some of the potential for human error. While certainly not necessary, we wanted to test it out before our trip to #TheMoon, which couldn't be scanned any other way, and it indeed proved invaluable.

It's no surprise to anyone that trying to scan plants with photogrammetry is usually futile.

This ain't gonna work.

So, we also captured many photometric stereo scans of plant debris. Most of the plant assets were then created dynamically with Blender's geometry nodes, using a combination of various scan data, references, and many months of hard work.

All in all, we're very proud of what we've achieved here and the scope of assets that we managed to capture and reproduce.
We're happy to share a small slice of our home country with you, and we hope that you make something beautiful with our work. We learned a lot in the process: about scanning in the field, how to better prepare next time, and how to process the data and build the assets more efficiently.

It may yet be some time before we take on another nature biome project, knowing how much work this has been and how the next one is likely to be even more ambitious. But in the meantime, we have a number of other projects cooking, so stay tuned to this blog and join our Discord to come along on the journey with us.
BLOG.POLYHAVEN.COM
Dev Log #18

Fabulous Fabric

One of the areas lacking in our texture library is fabric materials. Some years ago, Rob created some photometric stereo scans (what we're starting to call "light scanning", which is less of a tongue twister and harder to confuse with photogrammetry) of fabric materials, which you can find here: https://polyhaven.com/textures/fabric?a=Rob%20Tuytel

This is a good start, but if you've ever visited a fabric store (or taken a look inside your closet) you know there are hundreds of types of fabric that we're missing.

Photogrammetry is usually our primary method of capturing surface geometry and building textures; however, it has its limitations. When there is very little (relatively speaking) height variation in a surface, say for example a polished wooden tabletop, or any fabric, then you end up with a lumpy mess where the photogrammetry tool imagines height details from image noise or simple errors in calculation.

Lumpy scanned normals vs B2M faked normals. Both not great.

Essentially, you have to throw away the geometry and use the photogrammetry software as a glorified panorama stitcher, and then create the normal map/height map some other way. Going with the age-old bitmap-to-material (B2M) technique of fudging the albedo map until you get something that looks a bit like a height map is something we want to avoid.

A more accurate method is to keep the camera still and capture multiple images with different light angles, feeding them into a tool like Details Capture to compute the surface normals based on these known light angles. It, too, is an approximation, as things like material color, reflectivity, and translucency can throw it off, but the results are far better than B2M or photogrammetry.

B2M (worst), Photogrammetry (eh), Light Scan (pretty great)

One challenge with light scanning (photometric stereo) is that you're limited by the resolution of the camera, since the camera doesn't move around like it does in photogrammetry; the light moves instead. Even the highest-resolution consumer cameras don't reach 8K on the vertical axis. To get around this, you can shoot multiple light scans in a grid pattern with a bit of overlap, process them as separate textures, and then stitch them together again like a panorama. Doing this in a 2x2 grid with our camera results in slightly more than 8K resolution.

PTGui, the software we use for stitching HDRI panoramas, happens to work amazingly well for this task too, as it's designed to counter distortion curvatures, blend seams dynamically along natural edges in the image, and work with many gigapixels of data. Its JSON-based project files (or templating system) also make it trivial to stitch the same panorama for both the albedo and normal maps.

Side note: We played around a bit with pixel shift as well, which can theoretically reach about 12K resolution without moving the camera; however, this tech is very sensitive and we couldn't reliably capture sharp images. In many of our tests, the combined pixel shift image was slightly blurrier than even a single one of the 16 lower-resolution brackets, despite having more pixels. We should be able to solve this with more rigid setups, but then we also have to deal with inconsistencies in flash exposures. That could be solved with high-power video lights, but we moved on to the panorama method before investing in this.

We're starting to design a workflow around this to use at scale, so we can capture dozens more fabric materials and process them as painlessly as possible.
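For the curious, here is a minimal sketch of the core idea behind the light-scanning computation described above: with the camera fixed and the light direction known for each shot, per-pixel normals (and a rough albedo) fall out of a simple least-squares solve under a Lambertian assumption. This is only an illustration of the principle, not the tool or pipeline used at Poly Haven; the light directions and file names are placeholders.

```python
import numpy as np
import imageio.v3 as iio  # assumption: the shots are linear grayscale images

# Placeholder light directions (unit vectors) for each shot, and the matching images.
L = np.array([
    [ 0.5,  0.0, 0.87],
    [-0.5,  0.0, 0.87],
    [ 0.0,  0.5, 0.87],
    [ 0.0, -0.5, 0.87],
])  # shape (N, 3)
images = np.stack([iio.imread(f"shot_{i}.tif") for i in range(len(L))]).astype(np.float32)  # (N, H, W)

N, H, W = images.shape
I = images.reshape(N, -1)  # one column of intensities per pixel

# Lambertian model: I = L @ g, where g = albedo * normal.
# Solve the least-squares system for every pixel at once.
g, *_ = np.linalg.lstsq(L, I, rcond=None)  # (3, H*W)

albedo = np.linalg.norm(g, axis=0)                  # per-pixel albedo estimate
normals = g / np.maximum(albedo, 1e-8)              # unit surface normals
normal_map = normals.reshape(3, H, W).transpose(1, 2, 0) * 0.5 + 0.5  # pack to [0, 1]

iio.imwrite("normal_map.png", (np.clip(normal_map, 0, 1) * 255).astype(np.uint8))
```

Real tools add calibration, shadow/highlight rejection, and handling of non-Lambertian materials on top of this, which is exactly where the approximation errors mentioned above creep in.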
Once we're happy with it, we'll document the workflow on our wiki (like we have all our others) and share it for you to learn from and improve upon. We'll likely take the last few months of this year to plan and test things further, and then begin raiding fabric stores early next year.

Wood Workshop Visit

Wood, one of our most popular texture categories!

Wood textures are something you can never really have enough of. There are dozens of different species commonly used for furniture and construction, and a seemingly infinite number of ways to lay out planks in patterns for flooring or fences. And what about stains and finishes, roughness and polish, live edge and composites? While more than 10% of our texture library today is made up of wood materials, I'd say we haven't even scratched the surface.

This is actually the primary reason why we built the C-monster, and we finally got to put it to its intended use last week. We scanned a few veneer boards and piles of sawdust, and will begin processing them later this year when our to-do list is a little shorter. If those work out well, we plan to return to scan a few more types of wood.

Namaqualand

Our trip to the desert last month was a huge success! It was a bit more lush than we were expecting, but that also meant we could capture a lot of nice little flowers to make things more interesting. We haven't yet parsed through our data completely, but I'd estimate we have about 50 good assets waiting for us to process. We're also working on a behind-the-scenes video documenting the whole trip, which should be a fun watch. Sadly, we need to finish up our other projects before we can start work on this collection, but I'm already looking forward to seeing the results.

Little Falls

Speaking of other projects, Little Falls is next on our list to finish up. Progress is slow but steady, as we've been preparing for another top-secret mission you can read about on Patreon. In the meantime, here are some tasty renders that Rico threw together with the assets he's working on:

Hidden Alley

James's progress on the Hidden Alley scene is coming along great! We should have some more to share soon, but for now here are some sneak peeks:

Rigging Standards

You may have noticed a number of our models have been updated with rigged versions and affixed with a bone icon in the library. This is thanks to efforts started by Yann Kervran, who has been donating his time to help us rig existing models and come up with good standards for future rigs.

A focus for the rigs is to be software-agnostic and unopinionated, allowing changes to control elements without needing to redo any mechanical parts of the rigs. Every animator likes their rigs to work a certain way depending on the motion they have planned, so our approach is to create a rig that is broadly useful but still adaptable.

Collection Pages

Finally, I spent some time improving the layout of our collection pages, adding a nice image banner and some links for each collection. You may also notice a brand new collection: The Pine Forest. This combines several of our pine and fir tree assets, plus related textures and rocks, into a convenient bundle, and also makes the scene that Rob created for the project available for download.

Well, that's it for this Dev Log! Next time we'll talk more about our top-secret mission and share more progress on all our ongoing projects.
BLOG.POLYHAVEN.COM
Dev Log #19

Welcome to 2024! We have some exciting plans for this year, but before we can make any grand announcements we need to finish what we started and polish off our previous projects.

The Hidden Alley

Another wildly successful community project! You can read about it here, head straight to the assets page, or download the final scene file. We'll certainly set up another community project later this year, but for the next few months we need to focus on some other popular projects.

Little Falls → Verdant Trail

Our latest environment project is just about complete! We decided on a final name for the collection: "Little Falls" was simply a working title and no longer suitable, since there are no falls in our finished scene. Instead, we decided "Verdant Trail" is more in keeping with the theme. Here's a small sneak peek; we'll share more in a few weeks once all the assets are uploaded:

Fabric Plans

Last time I shared a bit of our big-picture ideas for scanning fabrics at scale. Since then we've been researching some other approaches and testing out a few different workflows that might work at the scale we want, without compromising on quality or spending 99% of our effort on 1% gains.

We're still ironing out the kinks (though not yet literally), but our plan is to construct a variant of the Cornell box so that we can capture solid physical reference of each fabric sample and use this to aid in replicating the material properties in Blender after we've digitized it with the photometric stereo workflow.

This may not be the final design, but the idea is to keep it simple while still providing as much information as possible to aid in discerning the subtle view-dependent material properties that are critical for creating a convincing fabric material, such as sheen weight, roughness, subsurface scattering, etc.

The fabric sample will be wrapped around a polystyrene sphere and impaled with an aluminum rod to support it on the back wall of the box, like a large leathery lollipop. A two-point light setup gives you a strong, but not too harsh, key light and a rim light in predictable locations.

We forgo the red and green walls of the traditional Cornell box to avoid confusing the hue of the fabric sample itself. Instead, half the walls are painted white to bounce some light onto the underside of the ball, and the rest are dark to keep enough contrast for the diffuse gradient to remain visible. The 18% grey sphere and Macbeth color chart provide anchor points for exposure and white balance respectively, ensuring we can calibrate the surface albedo accurately.

#TheMoon

Prior to the Blender Conference, we visited a lunar simulant facility in Rostock, Germany. You can hear more about the project in my lightning talk below, but in a nutshell, the aim is to create relatively small-scale textures of the lunar surface as accurately as we can without actually visiting the moon ourselves. We'll likely only get to processing these scans later this year, as we want to complete the Verdant Trail and Namaqualand collections first.

Q&A Stream

Finally, we hosted a stream at the end of the year to answer some community questions and share some updates and plans:
BLOG.POLYHAVEN.COM
A Verdant Trail

Download the scene file
Browse the assets

We're proud to present Poly Haven's latest asset collection: A Verdant Trail.

Browse the assets

The goal of this project was to explore the idea of traveling to different biomes and capturing scans over a few days to create a cohesive asset pack that can be used to recreate similar environments. South Africa has an absurd number of biomes within relatively easy reach, so our idea was to see what would be involved in creating an asset pack for each of them one day.

Micro-biomes of South Africa, and the location where most of our scans were captured
https://en.wikipedia.org/wiki/List_of_vegetation_types_of_South_Africa

We first started this project back in April of 2023, slowly gathering references, scouting locations, and planning the scope of the final collection.

Reference photo of a boulder we might have scanned

Our working title for the project was Little Falls, which is the name of the suburb where we gathered most of our data. We had some grand ideas of rivers and waterfalls as well, but these were soon decided to be too much of a time investment for something we can't publish as an asset on Poly Haven.

The small riverbed makes for quite a tranquil scene

This biome, technically a grassland, is extremely common in the highveld of South Africa. Our initial reason for choosing it as our first biome project was that it was very accessible to us: it covers most of our province and we're all very familiar with it.

Greg is scouting for scannable rocks amongst the grassy outcrop

Our primary reference was a small, rarely used public park, home to the waterfalls that the suburb is named after. The path is generally overgrown and runs treacherously alongside the polluted river.

Rico is trying not to fall off the narrow path into the riverbank below
Moments before Dario gets his shoe stuck in the mud
Jandre patiently waits for the rest of the team to catch up

There were plenty of rock faces to scan, though much of it was covered in wild grasses and proved a bit challenging to process. We later also visited other nearby parks and reserves to scan more surfaces that were a bit less overgrown.

The cliffside would be great to scan, were it not for all the dense foliage obscuring it

There was a wide variety of vegetation and rock types densely packed into this small area, so we had to be careful to create a mockup of the final scene and a wishlist of the assets we would really need before going out and capturing everything. We wanted to avoid being too overwhelmed on our first biome project.

The riverbank is home to a variety of washed-up trash and debris, brought by heavy rains in the summer months

Overall, the project was a great success! There is always more to scan, but we learned a lot about what is required for scanning things in the wild, such that we were able to travel to Namaqualand in September to capture that biome as well. We've started processing those scans too now, but that's a story for another blog post.

Until next time, thanks for your continued support on Patreon, and we look forward to seeing what environments you make with our new assets!

Rico, Dario & Jandre, on our first location scout.

Browse the assets
BLOG.POLYHAVEN.COM
Dev Log #21

Another two months fly by; here's what we've been working on:

Namaqualand

We're making great progress with the Namaqualand project and expect to finish it in the next two months. Hopefully, it'll be ready around the next dev log! It's amazing how much work there is after scanning a location for just a week. There's not much more to say other than that the team has been hard at work grinding out assets, so here's a little sneak peek:

Moon Progress

Meanwhile, Dario has started working on our moon scans in earnest. There will be quite a few of them, and they're all very similar, but we hope this will give you a lot of flexibility and variation in large environments.

Fabric

It's been a while since I last spoke about our fabric scans project, and plans have changed a bit since then. While assembling the Cornell box and pondering what sort of machine we'd build to automate the scanning process itself, we were contacted by a company in Germany, Colormass, that specializes in doing this sort of thing.

Colormass offered to scan a small set of fabrics for us, more or less at cost. While still quite a bit more than it would have cost us to do ourselves, it would free up a few months of our time to work on other projects, so we decided it was still a good idea. They are working on a set of 30 scans as we speak, but we've seen 4 of them so far:

Model Recategorization

In more general news, we've spent some time rethinking the categories we use for models. What we have now was thought up many years ago, back in the brief days of 3D Model Haven, when we had fewer than 50 models in total. Now, the library is considerably larger and more varied, and we're a bit tired of calling everything a "prop" or "decorative".

While we're already fairly happy with the new categories we've created, changing categories for everything all at once has a number of technical implications. There are unanswered questions about how specifically we want the new system to work, and whether or not it will be the same for textures and HDRIs too. The website, our Blender add-on, and any third-party tools using our API all need to work seamlessly while we make the transition, taking into account users who may not have the latest version and are still expecting the old categories. There's still much to decide and determine here, but I thought I'd mention it so you're aware of one of the many aspects of running an asset library besides the act of making the assets themselves.

Blender Asset Drop Handler

Speaking of our add-on, I made a patch for Blender 8 months ago that adds a small feature that would allow our add-on to be used more intuitively and efficiently.

"Quick and dirty proof of concept of what could happen if we had asset drop handlers in #b3d's Python API (my patch: https://t.co/JUDP0ls50U). pic.twitter.com/VLaAsHG1Df" (Greg Zaal, @GregZaal, May 16, 2024)

The feature is simply a Python hook into the asset browser's drag-and-drop event, which would allow us to attach our own code to this action. For example, we could download the asset after the user drags it into their scene, rather than requiring all users to download all assets upfront, as it currently works. Similarly, we could ask the user what resolution they'd like the asset to be, or simply follow some default preference. Currently, our assets always come in at 1K resolution, and you have to go to a menu to change this after adding them to the scene.

There are a number of other features we have in mind that rely on this patch, and it's been waiting patiently for review for the last year.
Finally, there has been some activity, and other developers have been working on similar patches and more general implementations, so it's possible we may finally get to see these features in Blender soon.

Better Astro-HDRIs?

Finally, a bit of a tease. One of our most requested types of HDRI is night HDRIs. We have a few, and a few more on the way from Namaqualand, but since shooting those we've realized our current lens is a bit soft. It's not bad (the image above is the worst case I could find), but between the poor coma and the relatively high level of noise (due to the 15-second exposure limit to avoid star trails), there is definitely some room for improvement.

Luckily, Sigma recently released a new lens that blows every other potential astro-HDRI lens out of the water: the 14mm f/1.4 Art. Not only is this lens twice as bright as our current lens, meaning theoretically half as much noise, but it's also on another level of sharpness wide open.

Nikon Z7 + Laowa 15mm F/2
Sony a7R4 + Sigma 14mm F/1.4

We visited a local camera store and shot a few tests, albeit during the day, and came to the conclusion that we needed this. We have an HDRI trip planned for the end of August, where we'll put it to the test and hopefully capture the best starry skies anyone has ever seen in an HDRI. Assuming the weather plays nicely with us.

Thanks for reading this far! We look forward to sharing more exciting progress in the next Dev Log. Until then, feel free to join us on Discord, where we are always hanging out, and let us know what you are working on too!
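As an aside on the 15-second exposure ceiling mentioned in the astro section above: the longest exposure before stars visibly trail is often estimated with simple rules of thumb. The snippet below compares the classic "500 rule" with the simplified NPF rule; these are generic approximations, not necessarily what the Poly Haven team uses, and the pixel-pitch figures are rough assumptions for the cameras mentioned.

```python
def rule_500(focal_length_mm: float, crop_factor: float = 1.0) -> float:
    """Classic rule of thumb: seconds before trails become obvious at small output sizes."""
    return 500.0 / (focal_length_mm * crop_factor)

def rule_npf(aperture: float, pixel_pitch_um: float, focal_length_mm: float) -> float:
    """Simplified NPF rule: a stricter limit aimed at pixel-level sharpness."""
    return (35.0 * aperture + 30.0 * pixel_pitch_um) / focal_length_mm

# Rough pixel pitches (assumptions): Nikon Z7 ~4.35 um, Sony a7R IV ~3.76 um.
print(rule_500(15))             # ~33 s: optimistic for a 45+ megapixel sensor
print(rule_npf(2.0, 4.35, 15))  # ~13 s for the Laowa 15mm f/2 on the Z7
print(rule_npf(1.4, 3.76, 14))  # ~12 s for the Sigma 14mm f/1.4 on the a7R IV
```

Under these estimates the new lens doesn't buy a longer exposure, only a brighter one at the same shutter speed, which is where the noise improvement comes from.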
BLOG.POLYHAVEN.COM
Dev Log #16

Growing the Team

Thanks to our Blender add-on, we are able to grow the core team, and in April we were joined by Jandre van Heerden! Jandre is working on all sorts of technical tasks and helping us improve and automate our processes. You've probably seen some of his scans already! This brings our full-time team to 5 people.

Also joining the team this month were Odin and Pan. They're a little less productive when it comes to making 3D assets, but we've decided to keep them on for moral support.

Hidden Alley

Our latest community project, the Hidden Alley, was again a great success! Voting has just begun to determine the prize order, while James continues work on the scene and asset uploads.

Little Falls

We're making good progress on the Little Falls project, starting to process some scans and making plans for the additional trips we'll need to capture more. The location we chose is a little inaccessible and challenging to capture good scans in, so we'll be visiting a few other parks and hills to gather some more content from similar biomes.

Mountain Pines

Our Mountain Pines project is also making great progress and is almost ready to share. Many of the assets are already online, and we've got just a few left to finish up before we can share the scene file.

Smugglers Cove Backlog

We have a few environment scans that made it into the Smugglers Cove asset pack on the Unreal Marketplace that we haven't yet shared on polyhaven.com, like the one above. This was mainly because they required some more work to be usable in Blender, while in Unreal, Nanite could practically render the pure scan. Rico is now finishing up these assets so we can share them with everyone.

Elinchrom Flash Rig

One of the first things that Jandre and I worked on last month was a rig for our new Elinchrom ELB 500 flash. We already use a Godox AR400, which is great in most cases, but we found we needed something brighter, with batteries (and temperature restrictions) that could last longer, in order to do scans in broad daylight. The ELB 500 is great for this, though it requires a custom mounting solution to make it usable for mobile cross-polarized photogrammetry.

We 3D printed most of this, designed around a simple cage we found locally. The design is not quite refined or generic enough for us to share here; I'm expecting to make a few revisions once we use it a bit more in the field.

Texture Color Calibration

We've finally formalized and standardized our diffuse map color calibration workflow using Macbeth charts. This was always a confusing issue for all of us, core team and contractors alike, but we've put our heads together and come up with a reliable and simple method to ensure the consistency and accuracy of our future texture scans.

Uncalibrated / Calibrated

The difference it makes can be quite dramatic, and I only wish we could go back in time and improve all of our past work as well.

We're off the grid

Our solar installation was completed a few weeks ago and it's been great having electricity 24/7 again. The size of the system is just enough to keep the office going during overcast days, and even to run a heater or two while the sun is out.

As luck would have it, now we have to deal with water outages too! This will be the second one in as many weeks, only this time it's planned to last for 5 days instead of only one. Depending on how things go in the coming months, we may need to opt for a borehole as well, but we'll cross that bridge when we come to it.
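On the chart-based color calibration mentioned above, here is a rough sketch of the general idea (not Poly Haven's actual workflow): scale and grey-balance a linear image so that a measured neutral patch of the chart lands on its known reflectance. The file names, patch coordinates, and the 0.18 reflectance value are placeholder assumptions; it also assumes a floating-point linear image, such as a 32-bit TIFF exported from your raw developer.

```python
import numpy as np
import imageio.v3 as iio

# Load a linear (not gamma-encoded) image of the scan that includes the chart.
img = iio.imread("scan_with_chart.tif").astype(np.float32)

# Placeholder: pixel region covering one neutral grey patch of the chart,
# and that patch's known reflectance (roughly 0.18 for the mid-grey patch).
patch = img[1200:1260, 800:860]
known_reflectance = 0.18

# Per-channel gains bring the measured patch to the known value,
# correcting exposure and grey balance in one step.
measured = patch.reshape(-1, patch.shape[-1]).mean(axis=0)
gains = known_reflectance / measured
calibrated = np.clip(img * gains, 0.0, None)

iio.imwrite("scan_calibrated.tif", calibrated)
```

A full workflow would typically fit against several patches of the chart at once and account for glare and lens falloff, but the single-patch version already removes most of the exposure and tint inconsistency between shoots.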
BLOG.POLYHAVEN.COM
Dev Log #17

Into Namaqualand

Early next month, almost the whole team will be off on our first real excursion! The location of choice, meant to be a fairly easy first trip, is the desert. Specifically, the Namaqualand area of the Northern Cape in South Africa, the same location where this HDRI was taken.

We figured that, after our struggles with lush vegetation making access to scannable objects and surfaces difficult for the Little Falls project (more on that below), a desert would be a nice simple environment to learn from that would also give us lots of content.

After planning the trip in more detail, however, things are a little more complicated. The accommodation we're staying in, in the Goegap Nature Reserve, has no electricity or cell service, and much of the park is only accessible with a 4x4.

Could we have chosen a better place to stay? Probably. But it's too late now, since all accommodation in the area is completely booked up. You see, we're visiting the Namaqualand area at precisely the most popular time of year: early spring, when flowers may bloom among the desert landscape, depending on your luck with the rains in the days prior.

This was intentional, of course, and the reason why we have to go now, even while the Little Falls project is still unfinished. The chance of seeing something more than just dry sand and rocks only comes once a year, and we think it could add a lot of value to an asset pack like this. Besides, who doesn't like a bit of a challenge?

We'll be charging all our cameras and flashes with solar power, or using our cars as generators if need be. For communication, good old-fashioned radios will do. We expect this to be quite the little adventure, so we'll be making a behind-the-scenes video exclusively for our Patrons if you're interested in following along.

Little Falls Progress

The Little Falls project has taken a bit of a back seat while we prepare for the desert and wrap up older projects, but I can share a few asset WIPs in the meantime:

Hidden Alley Scene

James is making steady progress on the Hidden Alley scene using everyone's assets. Much of the time it takes to put together a scene like this stems from the idea that Poly Haven should be publishing assets, not wasting time making pretty pictures. So we're trying to turn as much of the scene as possible into useful assets rather than one-time props that look OK at a distance just for the sake of a render.

Elinchrom Flash Rig

We've reworked our scanning rig for the Elinchrom ELB 500 flash, also known as Wall-E. The earlier prototype we made worked really well, so this is merely a structural upgrade to ensure it can survive the desert and be relied upon. Naturally, like our AR400 flash rig, this one was first designed in Blender. We'll be sharing the files for this soon in a separate post, once we've tested it and ironed out any issues.

C-Monster

Speaking of scanning rigs, we've made a large 3.2-meter C-frame with a rail for our Godox AR400 rig to slide along. While almost all our scanning work is done hand-held (or with a tripod), we have some plans to visit a wood shop and scan a dozen or so boards and veneers.
Doing that in a reasonable time frame with minimal back pain requires some level of automation. This rig can be operated manually with string, as pictured above, but we also intend to attach a stepper motor and a 3D-printed rack and pinion to automate the single axis.

Turntable WIP

Jandre has been working on an automated turntable for scanning small objects like rocks and food. While off-the-shelf turntables do exist, we decided to brew our own (open source) compact solution that can be adapted to other rigs (like the C-monster) and programmed to do anything we might need.

The "Scanning in the Void" method is our bread and butter here. With a mini tripod to hold the turntable, this can all be done in the back of a car on location. This helps avoid the environmental impact of transporting scan specimens away from their original location to our home base.

BlenderKit Integration

Our HDRIs have been available on BlenderKit for a while now, but only in the last month or so have we also added our textures and 3D models to their library. This is achieved by a script, made originally by Vilem (one of the BlenderKit admins) and rewritten by Jandre. We still think our own Blender add-on is the most user-friendly way to get direct access to our assets in Blender, but if you're already a BlenderKit subscriber, this is a good alternative.

Photography Workshop

A huge portion of our work comes down to what I would call technical photography, where it's important to understand how cameras and light work in order to capture the highest-quality images for photoscanning. We spared some time one afternoon to drive around Joburg and share what we know, having a bit of fun experimenting with what our gear is capable of even outside of the usual scan settings.
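Returning to the automated turntable mentioned above: the control side of such a rig usually boils down to a loop that alternates "advance one step" and "take a photo". The sketch below is a generic illustration of that loop, not Poly Haven's open-source design. The serial command bytes, port, and step count are placeholders for whatever firmware the turntable runs, and the capture step shells out to the gphoto2 command-line tool.

```python
import subprocess
import time
import serial  # pyserial; assumes the turntable controller appears as a serial device

STEPS_PER_REVOLUTION = 48   # placeholder: one photo every 7.5 degrees
PORT = "/dev/ttyUSB0"       # placeholder device path

with serial.Serial(PORT, 115200, timeout=2) as turntable:
    for step in range(STEPS_PER_REVOLUTION):
        # Placeholder command: tell the firmware to advance one step increment.
        turntable.write(b"STEP\n")
        turntable.flush()
        time.sleep(1.0)     # let the object settle before firing the flash

        # Trigger the camera and download the frame using gphoto2.
        subprocess.run(
            ["gphoto2", "--capture-image-and-download",
             "--filename", f"turntable_{step:03d}.%C"],
            check=True,
        )
```

The same loop structure works for the C-monster's single axis; only the "advance" command changes.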
BLOG.POLYHAVEN.COM
AI and Poly Haven

Things are a bit crazy at the moment, so the usual bi-monthly Dev Log will be a bit late. In the meantime, I thought it might be time we talked about our stance on AI in general. In the 3D and related art industries, there has been a great deal of concern about the future of people's careers and the ethics of current AI tools. As a publisher, especially of CC0 content, we obviously have some thoughts about all this.

The obviously bad stuff

The rest of this post might be somewhat controversial to some, so let's be clear about what we think makes AI sometimes bad.

Copyright law has yet to catch up with generative AI, so in many places it's a bit unclear what is and isn't legally allowed. Hence we need to lean more on morals and ethics than on the black-and-white law itself. We believe that if you do not expressly give permission in some way for others to use your work, whether for any purpose, in a database, or specifically for AI training, then it's not OK to train an AI with it.

Some freelancers are receiving less work because of AI, and that sucks. Some people have been laid off because their employer chose to replace them with an AI, and that sucks. Some people are earning less because their work has been devalued by the accessibility of AI, and that sucks. As time goes on, more and more artists will likely be affected like this and be forced to adapt or suffer.

We won't make assets with AI

One thing that's always been clear about publishing content on the internet is that there are volumes and volumes of trash, particularly when it comes to free content. Long before AI-generated assets were a twinkle in anyone's eye, we made the decision to focus on quality over quantity. It's why we spend weeks on every photoscanned texture instead of pumping out dozens of procedural textures like most other publishers.

With the introduction of AI-generated textures, HDRIs, and even 3D models, we predict a further increase in trash content on the internet, and we have no intention of joining this trend just to make some numbers go up. However, in the long term, we believe these tools will improve and the quality of their outputs will meet or exceed the assets we make. Even still, we will not be using generative AI tools exclusively to create assets en masse.

That's not to say we outlaw any kind of AI tool entirely; we already use some AI tools to make some of our assets sometimes. For example, ArtEngine (RIP) has some handy tools to help make our photoscanned textures seamless. They often don't work well, but when they do, they save a lot of time that would otherwise have been spent painting and clone-stamping manually.

The way we see it, as an artist you probably don't actually care how the assets you need are made. Whether the fire extinguisher you want for your project is modeled in Blender and textured in Substance, photoscanned from real life, or generated with some AI, as long as you can get a nice fire extinguisher under a permissive license that suits your needs, you probably don't care exactly how it was made.

But we do.

But not because we hate AI

We have nothing against generative AI; in fact, we think it's pretty cool. We just recognize that these tools rely on a good foundation of data.
The more real data (as opposed to generated data) they have access to, the better their generated results will be, and potentially the better your game or VFX shot will be.

Poly Haven has always been about lowering the barrier of entry for people to create higher-quality 3D art by providing tools and content to everyone, for free, equally. In many ways, generative AI has similar potential to level the playing field.

Our choice not to make generated content is not based on some legal or moral stance, but on a choice to be part of the source data used for training in the long term. We want Poly Haven to stick to its core values: being a source of high-quality 3D content based on photographic data and real life, available to everyone as freely as possible.

But at what cost?

The reality of the progression of technology is that some jobs will be made less valuable as automation improves, even to the point of becoming obsolete. Many people make the counterargument that your job won't be replaced by AI, just made easier and more efficient. That's true in some cases, but it also means employers and clients may want to pay less for your work, or let you go because they don't need as many people in that role now.

This issue has been, and still is being, debated to death on the internet (and since the Industrial Revolution, of course), so that's not what I want to talk about here. Instead, I want to focus on what it means for Poly Haven.

Realistically, we're working ourselves out of a job to some extent. We're providing the ethically sourced, no-strings-attached training data to tools that may ultimately replace us. Why go to some website to dig through hundreds of wood materials when you can stay in your texturing software and ask it for exactly what you want? Sure, there is some value in a library of any kind (regardless of how the content is created) that you can browse through when you don't really know what you want yet, but this can be implemented alongside generative tools too.

The reality is, we think, that the future of small asset libraries like Poly Haven might not look too good. Whether it's generative tools removing the need for third-party asset libraries entirely, or massive one-stop shops like FAB promises to be taking over any hope of competition, we feel it might be time to start adapting.

What we're doing about it

First of all, Poly Haven the website is not going anywhere. It's relatively cheap to host the website itself, so there's no danger of all our assets vanishing because AI took our jobs. It's also not fundamentally changing. Poly Haven will still be a 3D asset library of CC0 content, and we're still OK with people using our assets to train their AIs. We're actually kind of proud that we can be one of the few platforms that allow this unconditionally and without having to question laws or ethics.

What we're talking about is Poly Haven the team: the people who make the assets. What's the future for us?

We intend to use some of our excess resources to branch out a little and work on things that are related to asset creation, but not strictly just for those purposes. Something that we can use to help us make good assets, but with other goals and benefits as well. In other words, we plan to make video games.

We've had this idea for some time, but originally more as a dogfooding idea than anything else. We wanted to make bigger content using our assets, not just static scenes and simple animations, to help guide us toward making more usable assets.
The Blender Studio does something similar, making short films to help guide what features are developed for Blender and how they're implemented. We could do the same, but for us personally, short films are perhaps not as interesting as video games. We also think video games are maybe a little safer from being replaced by AI than films or other pre-rendered media, since they have several more layers of complexity.

A modest game or two as a testing ground for our environment assets was the original idea. Now that we see the future of asset creation as a career in a bit of jeopardy, it seems this game development idea can serve another purpose as well: diversifying our skills and income in the long term.

We obviously don't think we can make an amazing AAA game right off the bat, but we also don't have to. The future is not here yet, so we have some time to work on this idea slowly and make sure it aligns with what Poly Haven is today, while at the same time setting us up for what our team might need to transition to tomorrow.

We're not becoming a game studio. We still make assets. We're just trying to do that better, have some fun, and secure our future at the same time.

As always, I have to express our undying thanks to our Patrons, who support us and enable us to do what we feel is right and good for everyone, instead of chasing profits like most other corporate publishers. We couldn't do this without you <3

PS: Images in this post were generated with various AIs. Sorry, the satire was too tempting.
BLOG.POLYHAVEN.COM
Dev Log #20

It's been too long since the last dev log, so without further ado, here we go!

Moon update

Last October we went to the moon lab in Rostock, Germany, so what's happening with that? We're still working on the Namaqualand project first (more on that below), but our friends from Rostock recently asked us for something they could present at a conference, so we processed two of the scans and made a little demo scene to show their potential. We're not completely finished with the textures just yet, but it's a good indication of what's to come later this year.

Namaqualand progress

Our trip to the desert back in September last year is finally starting to show some results! We've completed a number of scans and worked out a few kinks in the process. The plan is to finish everything around July, but don't hold us to that just yet.

Reference / Finished model

Verdant Trail is out

We wrote a dedicated post about this when it was published in February if you want more details about the project as a whole; otherwise, here are some pretty pictures:

Dimensions, texel density & polycount

This is another one of those nice-to-have things that's been on my list for years, but I decided to tackle it recently: displaying the dimensions and polycount of models on the website, as well as the texel density for both models and textures. This helps you get an idea of the resolution of the asset before downloading it, or check whether it meets the requirements of your project.

The largest dimension is shown, with the word "tall" or "wide" depending on the axis. Hovering over the value will show all dimensions precisely. Texel density is shown in pixels per centimeter, but if you hover over the value it will also show a precise pixels-per-meter value, if that's more your style. Polycount is just triangles, before any subdivisions.

Modular buildings

https://polyhaven.com/models/structures/buildings

James started working on a set of modular urban buildings during the Hidden Alley community project, but decided they needed a bit more attention after the alley was completed, rather than trying to rush them in for the release of that collection or make our awesome community wait even longer for their assets to be published. This is one of our first forays into modular environment assets, so we'd love some feedback on their usability and design if you have any.

Plans to support geometry nodes

One of our long-standing issues, and a complaint we get fairly regularly, is that our tree assets are not very compatible with non-Blender software. One of the issues comes from the fact that our FBX exports are fully automated and don't necessarily handle geometry nodes well, which in this case results in the leaves going missing. We have an FAQ item about them and everything.

But it's not just limited to FBX; some Blender users just want a static tree mesh that doesn't have any complex controls and performance implications.
This is particularly the case in Blender's asset browser: often users don't want any kind of baggage attached to the asset; they expect a single simple tree to appear when they drag it in.

To address these problems, we've come up with the following solution. Models that use geometry nodes will have two variants inside their blend file:

- The original generator that is used to create and customize the model.
- A static version that is baked down with all modifiers applied, ready to export to FBX or simply use in Blender.

Both these versions will be marked as assets in our add-on so that they appear in the asset browser despite coming from the same blend file.

Another example is grass: sometimes you just want one small tuft of grass to place manually and accent your scene, but sometimes you expect to drag the asset onto a ground plane and automatically scatter grass instances across the mesh. Both of these could be accomplished by simply organizing the file well and marking the appropriate datablock as an asset. This needs to be handled on a case-by-case basis, so it'll take some time for us to investigate each of them, come up with some consistent standards, and update our assets to comply.

API licensing

The Poly Haven API is now generally available and free to use for non-commercial and academic purposes.

The API is simply a way for developers to access our assets with code, downloading what they need in the formats and variants they require, and it also includes all the metadata like tags, categories, and polycounts. This is used, for example, in our Blender add-on to download assets directly in Blender as needed.

Previously, we made the API public but did not formally specify how anyone was allowed to use it. This led to some great tools, like a Houdini plugin, and some AI model training; however, it also led to what we consider abuse of our CC0 assets for selfish gain: users essentially cloning our website and placing ads everywhere, hijacking our old names (and even Patreon accounts) to build SEO reputation. We were also contacted by a number of studios and corporations that were interested in using the API but wanted some kind of guarantee of reliability and consistency.

For these reasons, we decided to add a formal Terms of Service for the API and define what we're comfortable with and what developers can expect from us. Anyone wanting to use the API for commercial purposes will need to contact us to request permission (and a quote) to ensure we can cater to them sustainably. We still want the open-source community, students, and researchers to have unrestricted access, so the API is completely free to use for those purposes.

If all of this sounds confusing to you and you're not sure what the API even is, don't worry about it. All our assets are still CC0 and free for everyone; nothing has changed there.

Our stance on AI-generated assets

In case you missed it, I wrote a lengthy post about what we think of generative AI in the 3D asset industry, what that means for us, and what we're doing about it.

That about sums up the last few months! There's a little bit of work here and there that we aren't ready (or allowed) to talk about just yet, but you'll hear about some of it in the next Dev Log. Thanks for reading!
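As a footnote on the API section above, here is a rough sketch of the kind of thing the API enables for developers: listing assets, reading their metadata, and seeing which file variants exist. The endpoint names and response fields are recalled from the public documentation at api.polyhaven.com and may change, so verify them against the current docs (and the Terms of Service) before relying on this.

```python
import requests

API = "https://api.polyhaven.com"

# List all HDRI assets, then inspect one of them.
# The "t" query parameter filters by asset type (hdris, textures, models).
assets = requests.get(f"{API}/assets", params={"t": "hdris"}, timeout=30).json()
asset_id = next(iter(assets))  # any asset slug from the listing

info = requests.get(f"{API}/info/{asset_id}", timeout=30).json()
files = requests.get(f"{API}/files/{asset_id}", timeout=30).json()

print(asset_id, "-", info.get("name"))
print("categories:", info.get("categories"))
print("available HDR resolutions:", list(files.get("hdri", {}).keys()))
```

This is exactly the sort of access the new terms keep free for open-source, student, and research use, while commercial users are asked to get in touch first.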
BLOG.POLYHAVEN.COM
Dealing with motion when shooting HDRIs

This post continues from our main HDRI workflow article:
https://blog.polyhaven.com/how-to-create-high-quality-hdri

Some of my favorite HDRIs were shot in busy streets with crowds of people around, or with dramatic fast-moving clouds and rapid sunsets that might seem impossible to capture to those unfamiliar with the magic of masking. Contrary to what you might initially think, these things are in fact fairly trivial to deal with as long as you're aware of them and shoot carefully.

In a nutshell, you have four tools at your disposal:

- Timing your shots and planning for masking
- Shooting in order of priority
- Controlling control points
- Manual layer blending

Plan to Mask

When shooting in a busy street, a tourist attraction, or any other location where there are people/cars/cats outside of your control, the easiest way to get rid of these elements is to shoot the same angle multiple times with the intention of masking out anything you don't want.

Each of these images has something the other does not.

If you can, take your time and be patient. Chances are those pesky tourists are going to move away after a while. Heck, maybe you could even ask them to step aside for a few seconds while you get that shot. In a city street example, that traffic light is going to change sometime and you might get a lucky clean shot. But even if you can't get a single clean shot, you can get a couple of shots with fewer cars in them. With enough shots, you'll likely be able to capture every part of your image cleanly, even if no single image is empty. Worst case scenario, you may have some small areas that were never empty, but those can easily be filled in with inpainting.

Prioritize the Sky

For most HDRIs, the sky is what's most important: it's the part that's generally most visible (not covered up by foreground CG elements) and emits the most light. By the time you finish shooting the panorama, the light might be quite different than when you started, for example at sunset. Maybe the clouds light up bright pink for only a few seconds, and you don't want to miss that opportunity.

Start by shooting your upward rotation to capture the sky, and then work your way down. If the sunset happens to get even prettier while you are shooting, you can always stop and start again from the top. By the time you finish, the sun may have completely set and the ground could now be darker than before. This is mostly unavoidable, but it's better for the ground in your final HDRI to be slightly inaccurate (which most people won't be able to tell) than to miss the glorious sunset entirely.

In the same example as shown above, you can actually see the sunlight is quite different between these photos, since I had to wait a few minutes for the parked car to move. But can you tell in the final pano?

Sun still above the horizon / Sun just below the horizon

Delete Moving Control Points

In case it's not obvious, having control points on moving objects will likely confuse PTGui and introduce visible seams or even massive alignment issues. If you notice seams in your panorama, the first place to look is your control point table, sorted by distance value. You probably have some control points with high distances (i.e.
PTGui sees a disparity between the control point location and its expected location if the object were static).

The three control points in the clouds have massive distance/error values, since the clouds moved a lot between these shots.

Often, simply deleting control points that appear on moving objects is enough to resolve the issue. In some cases though, such as moving clouds, it may not actually be bad to have control points between them; it may be your only option, as long as you don't also have control points between static objects in the same images. It's the relative difference that causes the seams. Naturally, this might mean you have some high distance-value control points in your table, but as long as you know why they're there and how you're going to deal with them, that's totally fine.

Having control points on moving clouds can help stitch those more seamlessly; just be careful to avoid introducing seams on static things because of it. As usual, masking is your friend, and you can simply mask out everything except the sky in your upward-facing shots.

Manual Layer Blending

In some cases, such as clouds flying by overhead in high winds, you may not be able to avoid some stitching artifacts. These could be obvious seams or strange patches of inconsistent contrast. In one photo, the hillside might be in the sun, but in the next, it might be in the shade of a cloud.

Recent versions of PTGui (v12+) have some different ways of blending images together, and an optimum seam finder. In my experience, the default (zero-overlap with optimum seams) is usually the best overall, but it can sometimes cause dark and bright spots, especially near the zenith or when you have many small masked areas such as lens flares, birds, or moving people.

One of the common artifacts with zero-overlap blending: a dark patch

In those cases, the other blending mode, multiband (and without optimum seams), might do better. So I often export both a zero-overlap version and a multiband version, then overlay them in Affinity Photo and paint a mask to get the best of both and avoid their artifacts.

Multiband fixes that particular issue, though it also subtly darkens the sky behind the clouds

However, if you still have problems caused by significant motion, you may even need to blend the images yourself by hand. To do this, after exporting the HDRI normally, simply check the "Individual HDR Layers" output box to save a separate pano for each image. You will probably also want to enable only the images you think you'll need, so that only panos for those images are saved. After doing this, you'll end up with a set of images like this:

Now you can open the original HDRI in your editor of choice and drag the new layers on top of it. From here it's just a matter of manually creating masks for each image, one by one, until you've fixed all your problems.

HDR Ghosting

Ghosting is what happens when there is motion within the same HDR bracket set. One major feature of most professional HDR merging software is the removal of these ghosts. My little script doesn't have this feature, but most of the time you'll be masking out moving things completely anyway, so it doesn't matter what they look like.

In some rare cases, however, you might actually want to keep some moving objects in the HDRI.
For example, this kitten:

Ghosting artifacts caused by moving kitty / Fixed ghosting by choosing one of the brackets to take priority, at the cost of more noise

To get this result, I had to manually modify the masks for the compositing nodes in the blend file that my script created, so that it used one of the darker frames (hence all the noise) for the whole cat.

If anyone knows of a good HDR merging tool that can output linear, unbiased 32-bit images and also handles ghosting, please do let me know!

If you have any other struggles with motion when stitching HDRIs, let me know in the comments below and I can try to help you find a good solution.
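For readers who want to experiment with merging brackets themselves, below is a minimal sketch of a linear HDR merge with a crude ghost-rejection weight. It is not the script used for Poly Haven's HDRIs: it assumes you have already converted each bracket to linear float data with known exposure times, and it suppresses pixels that disagree with a chosen reference frame.

```python
import numpy as np

def merge_hdr(frames, exposure_times, ref_index=None, tol=0.25):
    """Merge linear-light brackets into one 32-bit HDR image.

    frames: list of float32 arrays (H, W, 3), already linear (e.g. decoded from RAW).
    exposure_times: shutter time in seconds for each frame.
    ref_index: frame used as the ghost reference (default: middle exposure).
    tol: relative difference above which a pixel is treated as having moved.
    """
    frames = [np.asarray(f, dtype=np.float32) for f in frames]
    times = np.asarray(exposure_times, dtype=np.float32)
    if ref_index is None:
        ref_index = int(np.argsort(times)[len(times) // 2])

    # Scale every frame into the same radiance domain.
    radiances = [f / t for f, t in zip(frames, times)]
    ref = radiances[ref_index]

    num = np.zeros_like(ref)
    den = np.zeros(ref.shape[:2], dtype=np.float32)
    for rad, frame in zip(radiances, frames):
        # Trust well-exposed pixels more (simple hat weight on 0..1 frame values).
        lum = frame.mean(axis=2)
        w = np.clip(1.0 - np.abs(lum - 0.5) * 2.0, 1e-4, None)
        # Crude deghosting: reject pixels that disagree with the reference frame.
        diff = np.abs(rad - ref).mean(axis=2) / (ref.mean(axis=2) + 1e-4)
        w = w * (diff < tol)
        num += rad * w[..., None]
        den += w

    out = num / np.maximum(den, 1e-6)[..., None]
    # Fall back to the reference frame wherever every bracket was rejected.
    return np.where(den[..., None] < 1e-6, ref, out).astype(np.float32)
```

Like the single-bracket-priority fix described above, this trades some noise for consistency: moving areas effectively come from the reference exposure only.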
BLOG.POLYHAVEN.COM
Polarized Photogrammetry Rig for Elinchrom ELB 500 Dual Flash

Our Godox AR400 rig works great for a scan or two, but quickly becomes a pain to work with when sunlight has to be contended with. The flash is simply not bright enough to completely overcome the midday sun, and if you shoot at relatively high power levels (at or above 1/4 power), you'll quickly hit hard-coded temperature limitations after a few hundred (or dozen) shots and need to repeatedly reset the flash to keep shooting.

So, before our trip to Namaqualand, we started looking for alternatives that would enable us to shoot in direct sunlight for extended periods of time and perhaps provide a better ergonomic experience. On the 3D Scanning Discord, we stumbled upon some images of other artists using the Elinchrom ELB 500 dual flash kit (along with a custom handheld rig) and decided to make our own.

Download Blend file
License: CC0

The rig is primarily made from off-the-shelf components such as aluminum extrusions, a camera L-bracket, and a wireless remote. Then, we designed 3D-printed handles, a small spacer, and feet to tie it all together.

Of course, we also needed to polarize the flashes, so a simple pair of 3D-printed rings clamps the polarizing filters against the front of each hood/reflector. The filters themselves, as on the AR400 rig, came from replacement filters for an iPad screen, which are just the right size and easy to obtain.

In a nutshell, the rig is simply a way to attach the flashes to the camera and provide a somewhat ergonomic way to hold everything. However, after struggling a bit with the AR400 rig, we also wanted to give ourselves some creature comforts to make it a little more versatile and enjoyable to use, such as:

- TPU/rubber feet, so it can stand easily on a table or floor without scratching anything.
- A top handle to help hold it low to the ground. The handle we have came from a video cage kit, so it also has convenient 1/4" threads to attach a monopod and hold it even more comfortably closer to the ground.
- A 1/4" tapped hole on the bottom for attaching to a tripod or monopod for easier maneuverability and height access.
- A removable wireless remote integrated into the right handle, lined up just where your index finger goes.
- An overall footprint that fits inside the Elinchrom kit's bag, protecting those big polarizers during transport.

It's certainly an upgrade over the AR400 rig, but usability-wise there are still some annoyances:

- The battery life is not that great when shooting at full power. It's better than the AR400, but still surprisingly short given the battery's physical size.
- The separate flash body, which houses the controls, battery, and electronics, is tethered by thick cables to the flash heads themselves and is a bit awkward despite its shoulder sling. A small hip-mounted bag might be a nice improvement.
- The modeling lights (LEDs you can turn on inside the flash heads to help with focusing and polarizer alignment) are not very bright and turn off automatically after only a few seconds, making the initial setup a bit frustrating.
- The aux connector on the body, which connects to the camera to trigger the flash, is unreliable and poorly positioned, so it gets damaged easily; hence the looseness and unreliability now.

If anyone has any ideas or suggestions for improvements, please comment below. If you'd like to build one yourself, the .blend file is available for download at the top of this post.
Parts and assembly should be fairly self-explanatory from the object names, though we worked with what we had on hand and could find locally.