• WWW.TECHRADAR.COM
    This curling wand looks like a drill and I'm not sure if it's dumb or genius
    MDLondon has reinvented the curling wand, and if it actually works it could be a game-changer.
  • WWW.MARKTECHPOST.COM
    A Coding Implementation for Advanced Multi-Head Latent Attention and Fine-Grained Expert Segmentation
    In this tutorial, we explore a novel deep learning approach that combines multi-head latent attention with fine-grained expert segmentation. By harnessing the power of latent attention, the model learns a set of refined expert features that capture high-level context and spatial details, ultimately enabling precise per-pixel segmentation. We will walk you through an end-to-end implementation using PyTorch on Google Colab, demonstrating the key building blocks, from a simple convolutional encoder to the attention mechanisms that aggregate critical features for segmentation. This hands-on guide is designed to help you understand and experiment with advanced segmentation techniques using synthetic data as a starting point.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import matplotlib.pyplot as plt
    import numpy as np

    torch.manual_seed(42)

    We import essential libraries such as PyTorch for deep learning, numpy for numerical computations, and matplotlib for visualization, setting up a robust environment for building neural networks. Also, torch.manual_seed(42) ensures reproducible results by fixing the random seed for all torch-based random number generators.

    class SimpleEncoder(nn.Module):
        """
        A basic CNN encoder that extracts feature maps from an input image.
        Two convolutional layers with ReLU activations and max-pooling are used
        to reduce spatial dimensions.
        """
        def __init__(self, in_channels=3, feature_dim=64):
            super().__init__()
            self.conv1 = nn.Conv2d(in_channels, 32, kernel_size=3, padding=1)
            self.conv2 = nn.Conv2d(32, feature_dim, kernel_size=3, padding=1)
            self.pool = nn.MaxPool2d(2, 2)

        def forward(self, x):
            x = F.relu(self.conv1(x))
            x = self.pool(x)
            x = F.relu(self.conv2(x))
            x = self.pool(x)
            return x

    The SimpleEncoder class implements a basic convolutional neural network that extracts feature maps from an input image. It employs two convolutional layers combined with ReLU activations and max-pooling to progressively reduce the spatial dimensions, thus simplifying the image representation for subsequent processing.

    class LatentAttention(nn.Module):
        """
        This module learns a set of latent vectors (the experts) and refines them
        using multi-head attention on the input features.

        Input:
            x: A flattened feature tensor of shape [B, N, feature_dim],
               where N is the number of spatial tokens.
        Output:
            latent_output: The refined latent expert representations of shape
                           [B, num_latents, latent_dim].
        """
        def __init__(self, feature_dim, latent_dim, num_latents, num_heads):
            super().__init__()
            self.num_latents = num_latents
            self.latent_dim = latent_dim
            # Learnable latent expert vectors, shared across the batch.
            self.latents = nn.Parameter(torch.randn(num_latents, latent_dim))
            self.key_proj = nn.Linear(feature_dim, latent_dim)
            self.value_proj = nn.Linear(feature_dim, latent_dim)
            self.query_proj = nn.Linear(latent_dim, latent_dim)
            self.attention = nn.MultiheadAttention(embed_dim=latent_dim,
                                                   num_heads=num_heads,
                                                   batch_first=True)

        def forward(self, x):
            B, N, _ = x.shape
            keys = self.key_proj(x)
            values = self.value_proj(x)
            # Expand the shared latents to one copy per batch element.
            queries = self.latents.unsqueeze(0).expand(B, -1, -1)
            queries = self.query_proj(queries)
            latent_output, _ = self.attention(query=queries, key=keys, value=values)
            return latent_output

    The LatentAttention module implements a latent attention mechanism in which a fixed set of latent expert vectors is refined via multi-head attention, using the projected input features as keys and values. In the forward pass, these latent vectors (the queries) attend to the transformed input, yielding refined expert representations that capture the underlying feature dependencies.
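    To make the shapes concrete, here is a small standalone check: a minimal sketch assuming the LatentAttention class above and the same default sizes the full model below uses (feature_dim=64, latent_dim=64, num_latents=16, num_heads=4).

    # Illustrative shape check only (not part of the original tutorial).
    # A 128x128 image passed through SimpleEncoder yields a 32x32 feature map,
    # i.e. N = 32 * 32 = 1024 spatial tokens of dimension feature_dim = 64.
    latent_attn = LatentAttention(feature_dim=64, latent_dim=64, num_latents=16, num_heads=4)
    dummy_tokens = torch.randn(2, 1024, 64)   # [B, N, feature_dim]
    experts = latent_attn(dummy_tokens)
    print(experts.shape)                      # expected: torch.Size([2, 16, 64])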
    class ExpertSegmentation(nn.Module):
        """
        For fine-grained segmentation, each pixel (or patch) feature is first
        projected into the latent space. It then attends over the latent experts
        (the output of the LatentAttention module) to obtain a refined
        representation. Finally, a segmentation head projects the attended
        features to per-pixel class logits.

        Input:
            x: Flattened pixel features from the encoder [B, N, feature_dim]
            latent_experts: Latent representations from the attention module
                            [B, num_latents, latent_dim]
        Output:
            logits: Segmentation logits [B, N, num_classes]
        """
        def __init__(self, feature_dim, latent_dim, num_heads, num_classes):
            super().__init__()
            self.pixel_proj = nn.Linear(feature_dim, latent_dim)
            self.attention = nn.MultiheadAttention(embed_dim=latent_dim,
                                                   num_heads=num_heads,
                                                   batch_first=True)
            self.segmentation_head = nn.Linear(latent_dim, num_classes)

        def forward(self, x, latent_experts):
            queries = self.pixel_proj(x)
            attn_output, _ = self.attention(query=queries,
                                            key=latent_experts,
                                            value=latent_experts)
            logits = self.segmentation_head(attn_output)
            return logits

    The ExpertSegmentation module refines pixel-level features for segmentation by first projecting them into the latent space and then applying multi-head attention using the latent expert representations as keys and values. Finally, it maps these refined features through a segmentation head to generate per-pixel class logits.

    class SegmentationModel(nn.Module):
        """
        The final model that ties together the encoder, latent attention module,
        and the expert segmentation head into one end-to-end trainable architecture.
        """
        def __init__(self, in_channels=3, feature_dim=64, latent_dim=64,
                     num_latents=16, num_heads=4, num_classes=2):
            super().__init__()
            self.encoder = SimpleEncoder(in_channels, feature_dim)
            self.latent_attn = LatentAttention(feature_dim=feature_dim,
                                               latent_dim=latent_dim,
                                               num_latents=num_latents,
                                               num_heads=num_heads)
            self.expert_seg = ExpertSegmentation(feature_dim=feature_dim,
                                                 latent_dim=latent_dim,
                                                 num_heads=num_heads,
                                                 num_classes=num_classes)

        def forward(self, x):
            features = self.encoder(x)
            # Use C for the channel dimension to avoid shadowing the F alias
            # for torch.nn.functional imported above.
            B, C, H, W = features.shape
            features_flat = features.view(B, C, H * W).permute(0, 2, 1)
            latent_experts = self.latent_attn(features_flat)
            logits_flat = self.expert_seg(features_flat, latent_experts)
            logits = logits_flat.permute(0, 2, 1).view(B, -1, H, W)
            return logits

    The SegmentationModel class integrates the CNN encoder, the latent attention module, and the expert segmentation head into a unified, end-to-end trainable network. During the forward pass, the model encodes the input image into feature maps, flattens and transforms these features for latent attention processing, and finally uses expert segmentation to produce per-pixel class logits.

    model = SegmentationModel()
    x_dummy = torch.randn(2, 3, 128, 128)
    output = model(x_dummy)
    print("Output shape:", output.shape)

    We instantiate the segmentation model and pass a dummy batch of two 128×128 RGB images through it. The printed output shape confirms that the model processes the input correctly and produces segmentation maps with the expected dimensions.

    def generate_synthetic_data(batch_size, channels, height, width, num_classes):
        """
        Generates a batch of synthetic images and corresponding segmentation
        targets. The segmentation targets have a lower resolution, reflecting
        the encoder's output size.
        """
        x = torch.randn(batch_size, channels, height, width)
        target_h, target_w = height // 4, width // 4
        y = torch.randint(0, num_classes, (batch_size, target_h, target_w))
        return x, y

    batch_size = 4
    channels = 3
    height = 128
    width = 128
    num_classes = 2

    model = SegmentationModel(in_channels=channels, num_classes=num_classes)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    num_iterations = 100
    model.train()
    for iteration in range(num_iterations):
        x_batch, y_batch = generate_synthetic_data(batch_size, channels, height, width, num_classes)
        optimizer.zero_grad()
        logits = model(x_batch)  # logits shape: [B, num_classes, H/4, W/4]
        loss = criterion(logits, y_batch)
        loss.backward()
        optimizer.step()
        if iteration % 10 == 0:
            print(f"Iteration {iteration}: Loss = {loss.item():.4f}")

    We define a synthetic data generator that produces random images and corresponding low-resolution segmentation targets to match the encoder's output resolution. Then, we set up and train the segmentation model for 100 iterations using cross-entropy loss and the Adam optimizer. Loss values are printed every 10 iterations to monitor training progress.

    model.eval()
    x_vis, y_vis = generate_synthetic_data(1, channels, height, width, num_classes)
    with torch.no_grad():
        logits_vis = model(x_vis)
        pred = torch.argmax(logits_vis, dim=1)  # shape: [1, H/4, W/4]

    img_np = x_vis[0].permute(1, 2, 0).numpy()
    gt_np = y_vis[0].numpy()
    pred_np = pred[0].numpy()

    fig, axs = plt.subplots(1, 3, figsize=(12, 4))
    axs[0].imshow((img_np - img_np.min()) / (img_np.max() - img_np.min()))
    axs[0].set_title("Input Image")
    axs[1].imshow(gt_np, cmap='jet')
    axs[1].set_title("Ground Truth")
    axs[2].imshow(pred_np, cmap='jet')
    axs[2].set_title("Predicted Segmentation")
    for ax in axs:
        ax.axis('off')
    plt.tight_layout()
    plt.show()

    In evaluation mode, we generate a synthetic sample, compute the model's segmentation prediction using torch.no_grad(), and then convert the tensors into numpy arrays. Finally, we visualize the input image, ground truth, and predicted segmentation maps side by side using matplotlib.

    In conclusion, we provided an in-depth look at implementing multi-head latent attention alongside fine-grained expert segmentation, showcasing how these components can work together to improve segmentation performance. Starting from constructing a basic CNN encoder, we moved through the integration of latent attention mechanisms and demonstrated their role in refining feature representations for pixel-level classification. We encourage you to build upon this foundation, test the model on real-world datasets, and further explore the potential of attention-based approaches in deep learning for segmentation tasks. Here is the Colab Notebook.
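    As a brief addendum (not part of the original tutorial), one quick way to sanity-check the trained model quantitatively is a plain pixel-accuracy measurement on a fresh synthetic batch, assuming the model and generate_synthetic_data defined above; because the targets are random noise, a score near 1/num_classes (about 0.5 here) is expected.

    # Hypothetical sanity check, not from the original article: pixel accuracy
    # on a fresh synthetic batch. Random targets mean a value near 1 / num_classes.
    with torch.no_grad():
        x_eval, y_eval = generate_synthetic_data(8, channels, height, width, num_classes)
        preds = torch.argmax(model(x_eval), dim=1)           # [8, H/4, W/4]
        pixel_acc = (preds == y_eval).float().mean().item()
    print(f"Pixel accuracy on synthetic data: {pixel_acc:.3f}")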
  • WWW.IGN.COM
    The Last of Us: Season 2 Premiere Review – “Future Days”
    The following contains full spoilers for The Last of Us season 2, episode 1, “Future Days.”

    The season 2 premiere of HBO’s The Last of Us is all about introductions and reintroductions – those friendly and not-so-friendly faces aiming to both enrich and destroy Ellie’s world. Over the course of the five years that have passed since Joel pulled her out of the Fireflies lab, she’s found some stability. But over the course of “Future Days,” we see new relationships flourish and existing ones strain. It’s a strong opening chapter that steadily turns the crank on the rollercoaster we’re about to join Ellie on.

    Season 1 centered on the burgeoning relationship between Ellie and Joel, so it's slightly jarring to see them barely spend a scene together here. The bond that seemed so solid up until last season’s final moments appears fractured. When they’re together in “Future Days,” they tend to trade angry outbursts and silent, dismissive looks.

    Pedro Pascal continues to impress as Joel, playing a softer version of the smuggler than the one Troy Baker played in the games. As shown in glimpses of his life in Jackson, Joel is more pensive and settled back into civilian life – the construction work and family structure he once had are seemingly revived. His hair has started to grey, and he’s more self-reflective than we’ve seen him before: undergoing therapy and dealing with raising a daughter into womanhood for the first time (seeing as that experience was ripped away from him previously). It’s a nuanced performance from Pascal: in the span of a single scene, you can chart sadness, frustration, and that ever-present sense of self-preservation across his face.

    In turn, there’s a newfound physicality in Bella Ramsey’s portrayal of Ellie. A barn brawl showcasing the training she’s undergone bridges the events of the first and second seasons. It’s not-too-subtle foreshadowing for the episode’s standout scene, when Ellie finds herself face-to-face with a Stalker – a new breed of intelligent infected who cleverly hunt their prey. They were easily my least favourite enemy to run into in The Last of Us Part 2, and that horror is well translated here. Chills are delivered effectively in the background as the Stalker crawls on distant shelves. The near-silence proves even scarier than the clicks of its brethren – its murmured cries are a sad reminder of the humanity lost inside. A crown of fungus adds to that terror, conveying an almost folk-horror feel. It's a shift in their visual design from Part 2, but a welcome one that I enjoy greatly.

    Showrunners Craig Mazin and Neil Druckmann promised more appearances from the infected in season 2, and that’s apparent from the get-go. Although the Clicker encounter may be less scary than the Boston museum scene of season 1, it still has a chilling effect. It’s clear from the way Ellie and Dina handle the situation that this isn’t their first rodeo – there’s a hint of routine in the way they clear out the store. It has an almost playful feel to it – like two kids breaking into school after dark – and the bottle-throwing distraction is a lovely nod to its stealth-action video game roots.

    Mazin’s direction of this scene – and all of “Future Days” – eases us back into the darkness of The Last of Us. As much as things change, things stay the same, with Joel's devotion to Ellie seemingly remaining paramount despite their relative newfound safety in the community of Jackson. This is best displayed in Joel’s scene with the town’s therapist, played by the ever-fantastic Catherine O’Hara. She’s warm, but with an underlying threat and vulnerability waiting to jump out.

    Joel’s willingness to evolve and move on is shown by the fact that all he wants to talk about is his relationship with Ellie. She is his world now, and nothing else matters, whether it be the infected, the Fireflies, a cure, or Sarah. What used to be primary issues for him have faded into the background, making the tension between Joel and Ellie even more stark. Even if it’s just glorified emotive exposition, this behind-closed-doors conversation displays the shared grief of everyone in this world wonderfully. Each of these characters has lost something, and Ellie is trying to work out how to communicate with Joel. It points toward this story’s message that there is no correct way to deal with emotions as strong as grief or hatred, with each person having to work it out for themselves. The callback to Joel dealing drugs in the series premiere is a nice touch, too, except this time he’s in search of emotional well-being as opposed to ration cards.

    Of course, there’s a spectre hovering above all of this, and she’s played by Kaitlyn Dever. I’ll admit, I was fairly surprised to see the revelation of Abby’s motives so early. We might not have the whole picture yet, but we do know that she’s seeking revenge for Joel’s Firefly massacre. Still, I can’t say I’m a huge fan of this change from the game. I much prefer Abby to be a character shrouded in mystery, with her motives remaining unknown for as long as possible – this is what makes the halfway point of the game hit like a hammer. It’s a shame to see this moment already lost here, but I do understand the decision on some level. A TV show doesn’t have the luxury of steadily introducing player agency and its repercussions. I do enjoy Dever’s version of Abby from the little we see here, though. It’s the emergence of a different kind of monster – her bubbling ferocity isn’t as physically signposted as in the game, but she’s fearsome nonetheless. When we see her standing in the snow over Jackson in the premiere’s final moments, this looming threat, combined with the reveal of tendrils growing in exposed pipes, reminds us that there truly is no safe place in this world anymore – to great effect.

    Abby’s reveal is the first early sign of the timeline being shifted for those familiar with the events of The Last of Us Part 2. The barn dance sequence is an obvious one later, too, and although it's beautifully shot and charmingly performed, I can’t help but feel it’s a moment that would have hit harder had it been a flashback revealed later on, as it is in Part 2. A prime example of this is Dina’s delivery of the line “I think they should be terrified of you”. In the game, this line is delivered right at the very end, and within that context it holds a lot more weight. In this remixed chronology, it merely reads as an omen, which in itself is interesting, but far less effective. It’s just one of a handful of changes from the source material that I feel have been made for no great reason, and that result in a weaker emotional response.

    Dina is given further time to shine when out in the wilds with Ellie. I greatly enjoyed watching the warmth gently building between the patrol partners, who share some of the same cheeky rebelliousness. She’s a great foil to Ellie, who, despite a newly hardened exterior, retains her precocious spirit. It all builds into the feeling that they’re the town’s rebels. Despite the democratic structure in place, they find ways to break the rules and have fun, even when faced with infected-infested buildings they shouldn’t be stumbling into. This feeling is later fortified when they find themselves in front of the town’s council and sink into their chairs like naughty children. I like the youthful feel of their scenes here, but I do feel apprehensive about how this dynamic will come across later, knowing the mature content coming up in this story.

    For all of The Last of Us’ grand themes, it’s also a show that relishes the small details. The pure, sweeping snowy landscapes juxtapose against the horrors that await anyone who steps beyond Jackson’s perimeter – a torn-apart bear now in eternal winter hibernation, for example. Another: Ellie is listening to Nirvana’s cover of “Love Buzz,” which is a nice nod to where these characters’ journeys will eventually take them, both emotionally and geographically. All of these little touches successfully build to a greater whole and result in a strong reintroduction into the world of The Last of Us.
  • WEWORKREMOTELY.COM
    Waite and Associates: Data and Client Services Coordinator
    This is a part- or full-time position, 100% remote, open to US-based candidates only. We are looking for applicants in the US due to time zone alignment and local compliance requirements. We are a small financial services company based on the West Coast, looking for a detail-oriented data management specialist. We work as a team to help clients, and this position requires good teamwork with financial advisors and other team members on data management and client service work. Skills include accurate data entry and management of client information, and professional communication with clients, both written and verbal, by phone and occasionally video. We are seeking a customer-service-oriented individual with excellent multitasking and time management skills.
  • WORLDARCHITECTURE.ORG
    Alvisi Kirimoto reimagines the classical temple as a living organism at Milan Design Week
    The international studio Alvisi Kirimoto unveiled an installation in the Exhibition-Event Cre-Action by Interni during Fuorisalone 2025. The historic courtyard of Università degli Studi di Milano "La Statale" is brought to life with the project TAM TAM. Temple, Action, Movement, which invites guests to participate in introspection and collective action.

    From April 7 to April 17, 2025, TAM TAM. Temple, Action, Movement is open to visitors in the ancient courtyard of Università degli Studi di Milano "La Statale." Inspired by the exhibition's subject, which blends Creativity and Action, Alvisi Kirimoto reimagines the traditional temple as a living, breathing organism that is constantly evolving, rather than as a static monument. The six columns of different diameters in the 6 x 6 x 5 m installation are dynamic components that guests can move and rearrange to change the area in real time.

    The installation combines the ideas of flexibility and participation with the classical principles of firmitas, utilitas, and venustas. The columns, which have historically represented stability, now serve as a concrete metaphor for how interpersonal relationships evolve.

    "With TAM TAM. Temple, Action, Movement, we started with the idea of the Temple, transforming it into a dynamic organism that adapts and responds to the needs of those who inhabit it. For us, architecture is not just about form, but about relationships and sensory experience," explained Junko Kirimoto, co-founder of the Alvisi Kirimoto studio. "Our goal was to create a space in constant transformation, one that fosters interaction and allows each visitor to become an integral part of its evolutionary process."

    "Architecture thus becomes an open dialogue, a continuous encounter between the individual and the environment that hosts them, where context and experience intertwine in mutual transformation," Kirimoto added.

    The layered nature of TAM TAM. Temple, Action, Movement explores how people and space interact. On the one hand, architecture directs the visitor even though it is changeable: the placement of the columns, their size, and the spaces they create subtly imply passageways, rest spots, and places for conversation. However, by shifting the columns, visitors to the installation alter not only its layout but also the web of connections it suggests: a corridor delineated by the columns either narrows into more private and secluded spaces or widens into a communal area akin to a square. The form and significance of the space are determined by human decisions, which are ongoing and always evolving. In this dynamic conflict between space and action, architecture "proposes," people "respond" and "reinterpret," revealing the true character of the installation: an architecture that communicates rather than imposes, and that encourages change through interpersonal contact rather than dictating it.

    The structure's white embodies the idea of possibility, as if it were a blank page waiting to be filled. By removing colors, textures, and superfluous decorations in favor of the installation's dynamic elements, such as the movement of the columns, visitor gestures, and the voids that are created and filled, Alvisi Kirimoto highlights the essence of the space, the purity of the forms, and, most importantly, the core of the human experience.

    The National Consortium for the Collection, Recycling, and Recovery of Plastic Packaging (COREPLA) is a strategic hub between companies, municipalities, and citizens, and TAM TAM. Temple, Action, Movement is constructed from recycled plastic in accordance with a design approach that emphasizes material life cycles. The Consortium works to properly manage the lifecycle of plastic packaging, which is clearly in the public interest. COREPLA aims to meet the recycling and recovery goals set by the European Union by uniting about 2,500 businesses within the plastic packaging supply chain.

    The installation may become an itinerant project after the event, and the materials used to create it will be recycled into new goods, giving the installation a second chance at life.

    Drawings: sketch, floor plan, elevation.

    Alvisi Kirimoto previously imagined an educational center like "a light leaf" in the green landscape of Florence, Italy. In addition, Alvisi Kirimoto, together with Studio Gemma, added a bold, floating educational hub to the LUISS University Campus in Rome, Italy.

    Project facts
    Project name: TAM TAM. Temple, Action, Movement
    Event: Fuorisalone 2025: Exhibition-Event Interni Cre-Action
    Location: Università degli Studi di Milano 'La Statale', Cortile d'Onore del ‘600, Via Festa del Perdono 7, Milan
    Project: Alvisi Kirimoto
    Dates: 7 - 17 April 2025
    Realisation: COREPLA – National Consortium for the Collection, Recycling, and Recovery of Plastic Packaging
    Structures: BUROMILAN
    Set-Up: Primo Tasco
    Tube processing: Plastica Cesena
    Tube coating: Ideal Verniciature

    All images © Giuseppe Miotto / Marco Cappelletti Studio.
    All drawings © Alvisi Kirimoto.

    > via Alvisi Kirimoto
  • WWW.CNET.COM
    Today's NYT Mini Crossword Answers for Monday, April 14
    Here are the answers for The New York Times Mini Crossword for April 14.
  • WWW.IAMAG.CO
    The Art of Amir Zand
  • REALTIMEVFX.COM
    No More Sketch Events?
    I have not seen any new Sketch Event for a long time. I would like to see it come back. Even if I don't participate most of the time, it's a great activity to come here and see how others think differently, take different approaches to the same topic, and get some inspiration from it. 2 posts - 2 participants Read full topic
  • WWW.ARCHITECTURAL-REVIEW.COM
    Competition results: Peja Culture Pavilion winner revealed
    The winners of an open international competition to revitalise and transform a neglected public space in the centre of Peja, Kosovo, have been announced. Open to everyone and organised by Buildner, the anonymous competition sought proposals to revitalise the site of a 15th-century water fountain which has played a crucial role in the social and cultural history of the settlement but has fallen into disrepair in recent years. Alexandra Ilinca Domnescu, Daria-Alexandra Pirvu, and Mario Eduard Peiciu from Romania won first prize and a student award for their ‘Trace’ proposal featuring an oval amphitheatre and a cultural pavilion. The winning concept features mirrored elements intended to enhance visual continuity, with tiered seating designed to improve acoustic performance during events. Second place and the sustainability award went to Jiongyuan Chen from China, while third place was awarded to Shpend Pashtriku, Sarah-Alexandra Agill, and Kaltrina Pashtriku from the UK. The €50,000 project, backed by Collective Action for Culture, which specialises in rejuvenating urban spaces through artistic expression and community involvement, aims to upgrade the site and deliver a ‘flexible multipurpose pavilion and an engaging outdoor space’ which integrates and celebrates the water fountain. The winning entries will be considered for construction by the project backers. Located in the Rugova mountainous region around 70km west of Pristina, Peja is the fourth largest city in Kosovo with a population of 96,500 people. Local landmarks include the Kinema Jusuf Gërvalla, which was constructed in the 1950s and reconstructed following the war at the turn of the millennium. The Peja Culture Pavilion contest focused on the site of a neglected water fountain located on the city’s main boulevard close to the river and central square. The project aims to revitalise and enhance the site by delivering a new pavilion that provides a ‘versatile environment for social events, art activities, and community gatherings.’ Key aims of the project include transforming the dilapidated public space into a ‘vibrant public hotspot’ and achieving a balance of ‘contemporary architectural advancements with historical conservation.’ Proposals for the €50,000 intervention were required to include a 50-70m² pavilion structure suitable for exhibitions and community meetings, an outdoor amphitheatre, a central feature around the water fountain, green spaces and public art. Competition site: Peja Culture Pavilion, Kosovo