• Michael Maltzan Architecture, Studio Zewde, and Once-Future Office to overhaul Toledo Museum of Art's galleries
    www.facebook.com
    In just a few years' time, the galleries inside the Toledo Museum of Art (TMA), a 1901 building by Edward Brodhead Green and Harry W. Wachter, will look completely different. Michael Maltzan Architecture, Studio Zewde, and Once-Future Office have been selected by TMA officials to carry out a comprehensive reinstallation.
  • The right facade transforms the atmosphere of any space
    www.facebook.com
    The right facade transforms the atmosphere of any space. At YKK AP (I am an Architect), we develop award-winning facade solutions that reduce carbon, withstand harsh weather conditions, and provide long-lasting durability, protecting the structures you build and the moments made within them. https://www.ykkap.com
  • Half-Life 2: Episode 3 New Footage Showcased in 20th Anniversary Documentary
    gamingbolt.com
    Celebrating Half-Life 2's 20th anniversary, Valve recently released a documentary detailing the legendary first-person shooter's development (in addition to giving the game away for free for a limited period). During that documentary, the company also showcases glimpses of a project that was highly anticipated but never saw the light of day: that, of course, being Half-Life 2: Episode 3. In the documentary, Valve has also revealed completely new and previously unseen footage of the cancelled project. It showcases a new blob-shaped, shapeshifting enemy type that could move through grates or split up into smaller versions of itself. Also shown off was a new weapon called the Ice Gun, which did exactly what its name suggests. Members of Episode 3's development team reveal that Valve had developed multiple levels of the game before the project was put on hold so that the team could help wrap up Left 4 Dead's development. By the time that was done, however, Valve leadership (including Gabe Newell) decided it was too late to return to Half-Life 2: Episode 3. You can view the full documentary below.
  • Atelier Yumia: The Alchemist of Memories and the Envisioned Land Trailer Introduces More Villains
    gamingbolt.com
    Gust revealed some new details over the weekend for Atelier Yumia: The Alchemist of Memories and the Envisioned Land, including a new trailer. The next iteration in the Atelier series focuses on newcomer Yumia Liessfeldt, who explores the ruins of the Aladissian Empire. Though various allies join her, they're opposed by a mysterious group. We've already seen the werewolf-like individual with a monocle, who remains as boisterous as ever, and the dragon-like creature. However, there's also a hooded, seemingly human figure with a snake-shaped staff, and a sheep-like witch who manipulates lightning and seems resentful towards alchemists. Perhaps they're homunculi, created by the disaster that led to the empire's downfall and the outlawing of alchemy. Whether they're targeting Yumia for her alchemical abilities or her past (especially regarding her mother) remains to be seen. Either way, she'll stop at nothing to uncover the truth. Atelier Yumia: The Alchemist of Memories and The Envisioned Land launches on March 21st, 2025, for Xbox Series X/S, Xbox One, PS4, PS5, PC, and Nintendo Switch.
  • www.facebook.com
    #GamesMix | #GTAVI
  • !
    www.facebook.com
    ! #GamesMix | #KaiCenat
  • Support Vector Machine (SVM) Algorithm
    www.marktechpost.com
    Support Vector Machines (SVMs) are a powerful and versatile supervised machine learning algorithm used primarily for classification and regression tasks. They excel in high-dimensional spaces and are particularly effective on complex datasets. The core principle behind SVM is to identify the optimal hyperplane that separates data points into different classes while maximizing the margin between them.
    SVMs have gained significant popularity because they can handle both linear and non-linear classification problems. By employing kernel functions, SVMs map data into higher-dimensional feature spaces, capturing intricate patterns and relationships that may not be apparent in the original space.

    Why Use SVM?
    - Effective in high-dimensional spaces: SVM can handle high-dimensional data without overfitting, making it suitable for complex problems.
    - Versatile: it can be used for both linear and non-linear classification and regression tasks.
    - Robust to outliers: SVM is relatively insensitive to outliers, which can improve its performance on noisy datasets.
    - Memory efficient: SVM models are relatively compact, making them efficient in terms of storage and computational resources.

    Linear SVM
    In a linearly separable dataset, the goal is to find the hyperplane that maximizes the margin between the two classes. The margin is the distance between the hyperplane and the closest data points from each class, known as support vectors.
    The equation of a hyperplane in d-dimensional space is:
    w^T * x + b = 0
    where w is the weight vector, x is the input feature vector, and b is the bias term.
    The decision function for a new data point x is:
    f(x) = sign(w^T * x + b)
    The optimization problem for maximizing the margin can be formulated as:
    Maximize: Margin = 2 / ||w||
    Subject to: y_i * (w^T * x_i + b) >= 1, for all i
    where y_i is the class label of the i-th data point.

    Non-Linear SVM
    For non-linearly separable data, SVM employs the kernel trick. The kernel function maps the data from the original space to a higher-dimensional feature space where it becomes linearly separable. Common kernel functions include:
    - Polynomial kernel: K(x, y) = (x^T * y + c)^d
    - Radial Basis Function (RBF) kernel: K(x, y) = exp(-gamma * ||x - y||^2)

    Limitations of SVM
    - Sensitivity to kernel choice: the choice of kernel function significantly impacts SVM's performance.
    - Computational complexity: training an SVM can be computationally expensive, especially on large datasets.
    - Difficulty in interpreting results: SVM models can be hard to interpret, especially when complex kernel functions are used.

    Understanding Where to Apply the SVM Algorithm
    Unsure where to use the Support Vector Machine algorithm? Its key applications include text classification, image classification, bioinformatics, and financial data analysis. SVM works best with well-defined classes, clear decision boundaries, and a moderate amount of data. It is particularly effective when the number of features is comparable to or larger than the number of samples.

    Conclusion
    Support Vector Machine is a versatile and powerful algorithm for classification and regression tasks. Its ability to handle high-dimensional data, its robustness to outliers, and its capacity to learn complex decision boundaries make it a valuable tool in the machine learning toolkit. However, achieving optimal performance requires careful consideration of the kernel function and of computational resources.
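    As a quick illustration of the ideas above (a minimal sketch, not part of the original article), here is how an RBF-kernel SVM can be trained with scikit-learn; the dataset, kernel choice, and hyperparameter values are illustrative assumptions, not recommendations:

        # Minimal SVM sketch (assumptions: breast-cancer dataset, RBF kernel, C=1.0, gamma="scale")
        from sklearn import datasets
        from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X, y = datasets.load_breast_cancer(return_X_y=True)                 # a binary classification dataset
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

        scaler = StandardScaler().fit(X_train)                              # scaling matters: the margin is distance-based
        clf = SVC(kernel="rbf", C=1.0, gamma="scale")                       # RBF kernel: K(x, y) = exp(-gamma * ||x - y||^2)
        clf.fit(scaler.transform(X_train), y_train)
        print("Test accuracy:", clf.score(scaler.transform(X_test), y_test))

    Swapping kernel="linear" recovers the linear SVM described earlier, while kernel="poly" gives the polynomial kernel.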
    Pragati Jhunjhunwala is a consulting intern at MarktechPost. She is currently pursuing her B.Tech at the Indian Institute of Technology (IIT) Kharagpur. She is a tech enthusiast with a keen interest in software and data science applications, and she is always reading about developments in different fields of AI and ML.
  • [AI/ML] Spatial Transformer Networks (STN) Overview, Challenges And Proposed Improvements
    towardsai.net
    Author(s): Shashwat Gupta. Originally published on Towards AI.
    Spatial Transformer Networks (STNs) let a model learn to modify spatial information dynamically, handling transformations such as scaling and rotation for subsequent tasks. They enhance recognition accuracy by enabling models to focus on essential visual regions with minimal dependence on pooling layers. This blog covers the practical advantages and disadvantages of STNs, and also examines P-STN, a 2020 proposal that adds probabilistic transformations and improves efficiency. Understanding STNs and their refinements helps in building more adaptable and precise machine learning models.
    Disclaimer: much of this section is based on the original papers on Spatial Transformer Networks [1, 2, 3].

    Spatial Transformer Networks (STN)
    STNs, introduced by Max Jaderberg et al., are modules that can learn to adjust the spatial information in a model, making it more robust to changes such as warping. Before STNs, achieving this required many layers of max pooling. Unlike pooling layers, which have fixed, small receptive fields, a spatial transformer can dynamically transform an image or feature map by applying a different transformation for each input. These transformations act on the entire feature map and can include scaling, cropping, rotation, and non-rigid deformations.
    This capability allows a network to focus on the important parts of an image (a process called attention) and to warp those parts to a canonical pose, making them easier to recognize in later layers. STNs extend the idea of attention modules to spatial transformations. They can be trained with regular back-propagation, so the entire model can be trained end to end. STNs are useful for various tasks, including image classification, object localization, and spatial attention.
    Figure 1: STN (source: https://arxiv.org/pdf/1612.03897.pdf)
    The STN consists of the following three parts:
    1. Localisation network
    2. Grid generator
    3. Sampler
    1. Localisation network: it takes the input feature map U in R^(H x W x C) and outputs the parameters of the transformation, theta = f_loc(U). It can take any form, but it should include a final regression layer that produces the transformation parameters theta.
    2. Parametrised grid sampling: the output pixels are computed by applying a sampling kernel centred at each location of a sampling grid over the input feature map. The only constraint is that the transformation must be differentiable with respect to its parameters, to allow back-propagation. A good heuristic is to predict a low-dimensional parametrisation of the transformation, which reduces the complexity of the task assigned to the localisation network and still lets it learn about the target grid representation (for example theta = M * theta_B, where B is the target representation; both theta and B can then be learned). In our case we analyse 2D affine transformations, for which the grid mapping from target coordinates (x^t_i, y^t_i) to source coordinates (x^s_i, y^s_i) is:
    [x^s_i, y^s_i]^T = A_theta * [x^t_i, y^t_i, 1]^T, with A_theta = [[theta_11, theta_12, theta_13], [theta_21, theta_22, theta_23]]
    3. Differentiable image sampling: to perform a spatial transformation of the input feature map, a sampler takes the set of sampling points T_theta(G), along with the input feature map U, and produces the sampled output feature map V. Each (x^s_i, y^s_i) coordinate in T_theta(G) defines the spatial location in the input where a sampling kernel is applied to obtain the value of a particular pixel in the output V.
    This can be written as:
    V^c_i = sum_n sum_m U^c_{nm} * k(x^s_i - m; Phi_x) * k(y^s_i - n; Phi_y)
    where Phi_x and Phi_y are the parameters of a generic sampling kernel k() that defines the image interpolation (e.g. bilinear), U^c_{nm} is the value at location (n, m) in channel c of the input, and V^c_i is the output value for pixel i at location (x^t_i, y^t_i) in channel c. Note that the sampling is done identically for each channel of the input, so every channel is transformed identically (this preserves spatial consistency between channels).
    In theory any sampling kernel can be used, as long as (sub-)gradients can be defined with respect to x^s_i and y^s_i. For example, using the integer sampling kernel reduces the above equation to:
    V^c_i = sum_n sum_m U^c_{nm} * delta([x^s_i + 0.5] - m) * delta([y^s_i + 0.5] - n)
    where [x + 0.5] rounds x to the nearest integer and delta() is the Kronecker delta function. This sampling kernel simply copies the value at the pixel nearest to (x^s_i, y^s_i) to the output location (x^t_i, y^t_i). Alternatively, a bilinear sampling kernel can be used, giving:
    V^c_i = sum_n sum_m U^c_{nm} * max(0, 1 - |x^s_i - m|) * max(0, 1 - |y^s_i - n|)
    To allow back-propagation of the loss through this sampling mechanism, we can define the gradients with respect to U and G. For the bilinear sampling above, the partial derivatives are:
    dV^c_i / dU^c_{nm} = max(0, 1 - |x^s_i - m|) * max(0, 1 - |y^s_i - n|)
    dV^c_i / dx^s_i = sum_n sum_m U^c_{nm} * max(0, 1 - |y^s_i - n|) * (0 if |m - x^s_i| >= 1; 1 if m >= x^s_i; -1 if m < x^s_i)
    and similarly for dV^c_i / dy^s_i. This gives a (sub-)differentiable sampling mechanism, allowing loss gradients to flow back not only to the input feature map but also to the sampling grid coordinates, and therefore back to the transformation parameters and the localisation network, since dx^s_i/d(theta) and dy^s_i/d(theta) are easily derived. Because of discontinuities in the sampling functions, sub-gradients must be used. The mechanism can be implemented very efficiently on a GPU by ignoring the sum over all input locations and only considering the kernel support region of each output pixel.
    For better warping, STNs can be cascaded, passing the output of one STN to the next (as in [2]), optionally with an additional conditioning input (as in [1]). A minimal PyTorch sketch of a single STN module follows the pros and cons below.

    Pros and cons of STNs
    The overall pros of STNs are:
    - STNs are very fast, and applying them does not require many modifications to the downstream model.
    - They can also be used to downsample or oversample a feature map (downsampling with a fixed, small support may introduce aliasing).
    - Multiple STNs can be combined. A series combination (the output of one STN feeding the next, with or without an unwarped conditional input) supports more complex feature learning; a parallel combination is effective when there is more than one part of the image to focus on (with two STNs on the CUB-200-2011 bird classification dataset, one became a head detector and the other a body detector).
    However, STNs are known to suffer from the following defects:
    1. Boundary effects arise because the transformed image, not its geometric information, is propagated (for example, if an image is rotated, an STN can undo the rotation but not degraded boundary artefacts such as cut corners). This can be addressed with boundary-aware sampling.
    2. A single STN is insufficient to learn complex transformations. This can be addressed with hierarchically cascaded STNs (STNs in series) with multi-scale transformations.
    3. Training difficulty: STNs are hard to train because they are sensitive to small mis-predictions of the transformation parameters; addressed by P-STN (below).
    4. Sensitivity to errors: mis-predicted transformations can lead to poor localization, adversely affecting downstream tasks; addressed by P-STN (below).
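    Below is a minimal, hedged PyTorch sketch of the three-part STN described above (localisation network, grid generator, sampler), in the spirit of the official PyTorch tutorial listed in the references. The layer sizes and the assumption of 1-channel 28x28 inputs are illustrative, not taken from the papers:

        # Minimal STN module sketch (assumes 1-channel 28x28 inputs; sizes are illustrative)
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class STN(nn.Module):
            def __init__(self):
                super().__init__()
                # Localisation network: predicts the 6 affine parameters theta from the input
                self.loc = nn.Sequential(
                    nn.Conv2d(1, 8, kernel_size=7), nn.MaxPool2d(2), nn.ReLU(),
                    nn.Conv2d(8, 10, kernel_size=5), nn.MaxPool2d(2), nn.ReLU(),
                )
                self.fc_loc = nn.Sequential(
                    nn.Linear(10 * 3 * 3, 32), nn.ReLU(),
                    nn.Linear(32, 6),  # final regression layer producing theta
                )
                # Initialise to the identity transform so training starts from "no warp"
                self.fc_loc[2].weight.data.zero_()
                self.fc_loc[2].bias.data.copy_(torch.tensor([1.0, 0.0, 0.0, 0.0, 1.0, 0.0]))

            def forward(self, x):
                theta = self.fc_loc(self.loc(x).flatten(1)).view(-1, 2, 3)
                grid = F.affine_grid(theta, x.size(), align_corners=False)   # grid generator
                return F.grid_sample(x, grid, align_corners=False)           # bilinear sampler

        # Usage: warped = STN()(torch.randn(4, 1, 28, 28))

    F.grid_sample performs exactly the bilinear sampling written out above, and its (sub-)gradients flow back to both the input and the predicted theta.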
    P-STN: an improvement over STN
    Probabilistic Spatial Transformer Networks (P-STN), by Schwöbel et al. [7], address limitations 3 and 4 by introducing a probabilistic framework to the transformation process. Instead of predicting a single deterministic transformation, P-STN estimates a distribution over possible transformations.
    Figure 2: the P-STN pipeline. From the observed image I, a distribution of transformations is estimated. Samples from this distribution are applied to the observed image to produce augmented samples, which are fed to a classifier that averages across samples. In the deterministic STN case, the localizer computes only one transformation theta(I), which can be thought of as the maximum-likelihood solution; instead of multiple transformation samples, we obtain a single T_theta^I.
    This probabilistic approach offers several key improvements:
    1. Robustness through marginalization. By sampling multiple transformations from the estimated distribution, P-STN effectively looks at the input from various perspectives; marginalizing over transformations mitigates the impact of any single mis-predicted transformation, and the integration yields a smoother, more stable loss landscape that is easier to train.
    2. Enhanced data augmentation. The stochastic transformations act as a form of learned data augmentation, automatically generating diverse training samples that improve generalization, leading to better classification accuracy, increased robustness, and improved model calibration.
    3. Applicability to diverse domains. Although designed for image data, P-STN's probabilistic formulation generalizes to non-visual domains such as time-series data.
    Illustrative benefits: the STN loss is the negative log-likelihood under a single predicted transformation, whereas the P-STN loss is the average negative log-likelihood over multiple sampled transformations; averaging reduces the impact of any single erroneous transformation and leads to more stable training. Considering multiple transformations also yields better-calibrated probabilities, with a lower calibration error for P-STN than for STN.
    In summary, P-STN enhances the original STN framework by introducing a distribution over possible spatial transformations. This leads to more robust training, effective data augmentation, improved classification performance, and better-calibrated models. The use of variational inference and Monte Carlo sampling provides a principled way to handle transformation uncertainty, making P-STN a significant advancement over traditional STNs. A small sketch of the averaging idea follows below.
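    To make the averaging idea concrete, here is a small, hedged sketch of the P-STN-style pipeline described above; it is my illustration, not code from the paper. A localiser head predicts a mean and log-variance over the six affine parameters, several transformations are sampled, and the classifier's log-probabilities are averaged across the warped samples. The classifier module, layer sizes, and 1x28x28 input shape are assumptions:

        # P-STN-style averaging sketch (illustrative; `classifier` is any module producing class logits)
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ProbabilisticWarp(nn.Module):
            def __init__(self, classifier, num_samples=4):
                super().__init__()
                self.classifier = classifier
                self.num_samples = num_samples
                # Localiser head predicting mean and log-variance of the 6 affine parameters
                self.loc = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 12))
                self.loc[3].weight.data.zero_()
                self.loc[3].bias.data.copy_(torch.tensor([1.0, 0.0, 0.0, 0.0, 1.0, 0.0] + [-5.0] * 6))  # identity mean, small variance

            def forward(self, x):
                stats = self.loc(x)
                mu, log_var = stats[:, :6], stats[:, 6:]
                log_probs = []
                for _ in range(self.num_samples):
                    # Sample one transformation from the predicted Gaussian and warp the input with it
                    theta = (mu + torch.randn_like(mu) * (0.5 * log_var).exp()).view(-1, 2, 3)
                    grid = F.affine_grid(theta, x.size(), align_corners=False)
                    warped = F.grid_sample(x, grid, align_corners=False)
                    log_probs.append(F.log_softmax(self.classifier(warped), dim=1))
                # Average log-likelihood over sampled transformations (the P-STN-style objective)
                return torch.stack(log_probs).mean(0)

        # Usage: model = ProbabilisticWarp(nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)))
        #        out = model(torch.randn(4, 1, 28, 28))   # feed into nn.NLLLoss during training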
    I write about technology, investing, and books I read. Here is an index to my other blogs (sorted by topic): https://medium.com/@shashwat.gpt/index-welcome-to-my-reflections-on-code-and-capital-2ac34c7213d9
    References:
    - Paper, IC-STN: https://arxiv.org/pdf/1612.03897.pdf
    - STN: https://paperswithcode.com/method/stn
    - Video (with slides, CV reading group resources): https://www.youtube.com/watch?v=6NOQC_fl1hQ&t=162s
    - Paper: K. Lenc and A. Vedaldi. Understanding image representations by measuring their equivariance and equivalence. CVPR, 2015 (defines the affine invariance, equivariance, and equivalence criteria).
    - STN PyTorch implementation: https://pytorch.org/tutorials/intermediate/spatial_transformer_tutorial.html
    - Scattering networks: https://paperswithcode.com/paper/invariant-scattering-convolution-networks#code
    - P-STN: https://backend.orbit.dtu.dk/ws/portalfiles/portal/280953750/2004.03637.pdf
    Published via Towards AI
  • Best Internet Providers in Massachusetts
    www.cnet.com
    CNET breaks down the top internet providers across Massachusetts, from Boston to Provincetown.
  • Today's NYT Mini Crossword Answers for Monday, Nov. 18
    www.cnet.com
    Looking for the most recent Mini Crossword answers? Click here for today's Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands and Connections puzzles.
    The New York Times Crossword Puzzle is legendary. But if you don't have that much time, the Mini Crossword is an entertaining substitute. The Mini Crossword is much easier than the old-school NYT Crossword, and you can probably complete it in a couple of minutes. But if you're stuck, we've got the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.
    The Mini Crossword is just one of many games in the Times' games collection. If you're looking for today's Wordle, Connections and Strands answers, you can visit CNET's NYT puzzle hints page.
    Read more: Tips and Tricks for Solving The New York Times Mini Crossword
    Let's get at those Mini Crossword clues and answers.
    The completed NYT Mini Crossword for Nov. 18, 2024. NYT/Screenshot by CNET
    Mini across clues and answers
    1A clue: ___ Martin, frequent collaborator with 1-Down. Answer: STEVE
    6A clue: Parts of irrigation systems. Answer: HOSES
    7A clue: Beginning. Answer: ONSET
    8A clue: Backup camera's place on a car. Answer: REAR
    9A clue: Make an attempt. Answer: TRY
    Mini down clues and answers
    1D clue: Martin ___, frequent collaborator with 1-Across. Answer: SHORT
    2D clue: Stuff in a printer cartridge. Answer: TONER
    3D clue: Common kind of test for a literature class. Answer: ESSAY
    4D clue: Make a sudden turn. Answer: VEER
    5D clue: Jokey suffix with "best". Answer: EST
    How to play more Mini Crosswords
    The New York Times Games section offers a large number of online games, but only some of them are free for all to play. You can play the current day's Mini Crossword for free, but you'll need a subscription to the Times Games section to play older puzzles from the archives.