UNITY.COM
Mazda and Unity: Pioneering a new future for automotive cockpit HMI

With market-leading multiplatform support and efficient development workflows for user experiences (UX), the Unity Engine and Editor are becoming the go-to solution for carmakers developing their next-generation in-vehicle Human-Machine Interfaces (HMI). On March 7, 2024, Unity Japan publicly announced a partnership with Mazda Motor Corporation to embed Unity in future Mazda vehicles. In a conversation with Seiji Goto, general manager of Infotainment and Cockpit Electronics at Mazda, we gained insight into Mazda's perspective on HMI and Unity.

A vision for 2030: Driving forward with Mazda

As part of Mazda's ambitious 2030 roadmap, research and development is accelerating in many areas, including HMI. The aim is to take on the challenge of simultaneously improving safety and customer value through intuitive, great-looking, and responsive UX. Mazda will work directly with Unity to create a more "human-centric" in-vehicle experience.

"Drivers process a variety of information while driving, and we believe it is important for them to be able to recognize and understand information inside and outside the car intuitively, and to operate the car intuitively," explains Mr. Goto.

The current world of HMI

Over the last few years, the amount of information presented to drivers and passengers in vehicles has increased. Goto, who joined Mazda in 2015, points to the move from hard disks to cloud-connected vehicles, and to the continuous growth of data to be managed for navigation systems alone. Throughout the industry, the same is true for the recent advent of Advanced Driver Assistance Systems (ADAS), where the amount of information displayed has scaled with system performance.
A key challenge is to convey relevant information to the driver in an easy-to-understand manner while keeping the system quick to react and distraction-free.

Bridging the technology gap and the human-machine gap

In-vehicle HMIs now require technologies that were pioneered in the games industry. Systems like a scene tree for 3D graphics, animation blending, or easily exchangeable prefabs are challenging to build from scratch, but they are standard in video game development, and they are ideal for tackling the challenges of modern HMI.

"By utilizing Unity's expertise in real-time 3D rendering for our user interface to spatially represent information from many car systems, we will be able to reduce the time and burden on the driver to recognize and understand information, realizing a safer and more convenient driving experience," says Goto.

Which UI works best differs between individuals, but Goto believes that by using Unity, the HMI can be personalized to meet each driver's individual requirements.

Unity and Mazda: A strategic partnership

For carmakers, it is important to have an integrated development environment that allows designers, developers, and other contributors to iterate on a project efficiently. Over the course of Mazda's search for a toolchain to power its next-generation HMI, Unity emerged as an innovative, forward-looking solution. Goto outlines several reasons that make Unity a clear choice for automotive HMI:

- An active community provides a trove of documentation, tools, and solutions.
- The ability to tap into a large user base of game developers makes it easy to hire Unity experts anywhere in the world.
- Unity has a track record of multiplatform adaptability that reduces the risk posed by long-term technological change.
- The development tools are easy to use.

The partnership between Mazda and Unity Technologies Japan Corporation is a milestone in automotive HMI development.
"Mazda is accelerating research and development in all areas under the 2030 Management Policy," said Michihiro Imada, Mazda's executive officer in charge of Integrated Control System Development. "In the cockpit HMI area, Mazda will continue to evolve the interface between the human and car based on the 'human centric' development concept to deliver exciting mobility experiences. Specifically, Mazda will take on the challenge of further improving safety and convenience by enabling intuitive human operation and creating new value for vehicles."

Mr. Imada continues: "By working with Unity, which is highly regarded globally for its technical capabilities and high quality in the rapidly innovating game industry, Unity can offer graphical user interface (GUI) solutions in the cockpit HMI and advance Mazda's goal of human-centric vehicle engineering."

Future outlook

Creating an automotive HMI experience involves complex processes and a lot of exchange between departments such as marketing, manufacturing, UX design, and software engineering. Beyond embedded HMI, Unity's real-time 3D (RT3D) capabilities are used for VR-based UX testing, prototyping, engineering and design visualizations, car configurators, operational digital twins, and other applications in the automotive sector. Mazda believes it can introduce Unity in each of these departments, and if it does so successfully, these teams will be able to communicate through the same development environment. This will help make the work itself more enjoyable and encourage more customer-oriented proposals. By building an open development environment and system, better products can be created.

Learn more about how Unity can boost your HMI project at unity.com/hmi
TECHCRUNCH.COM
World partners with Tinder, Visa to bring its ID-verifying tech to more places

World, the biometric ID company best known for its eyeball-scanning Orb devices, on Wednesday announced several partnerships aimed at driving sign-ups and demonstrating the applications of its tech. World is partnering with Match Group, the dating app conglomerate, to verify the identities of Tinder users in Japan using World's identity verification system. Additionally, World has established separate collaborations with the prediction market startup Kalshi and the decentralized lending platform Morpho; these partnerships let customers sign in to those services using IDs already registered with World. And World plans to team up with Visa to launch the World Card, a card that lets users spend digital assets anywhere Visa is accepted.

Since its founding in 2019, World, developed by the San Francisco- and Berlin-based Tools for Humanity, has raised hundreds of millions of dollars in venture capital and created digital IDs for millions of users. But it has yet to break into the mainstream, in part because of its cumbersome approach to verifying IDs. With these new partnerships, World is going after a broader audience — one that previously might not have considered having their eyeballs scanned to verify their "humanness."

The World Card is perhaps the most interesting of the new projects. Expected to become available in the U.S. later this year, it will connect to World's World App and allow users to transact with cryptocurrencies. The card will automatically exchange crypto for fiat when needed, and potentially offer certain rewards for specific "AI subscriptions and services."
VENTUREBEAT.COM
Epic Games touts victory in latest court ruling in Apple antitrust case

A federal district court judge found that Apple willfully violated a court order in the Epic Games v. Apple antitrust case.
VENTUREBEAT.COM
Qwen swings for a double with 2.5-Omni-3B model that runs on consumer PCs, laptops

The Qwen2.5-Omni-3B model is licensed for non-commercial use only under Alibaba Cloud's Qwen Research License Agreement.
WWW.THEVERGE.COM
The BBC deepfaked Agatha Christie to teach a writing course

BBC Studios is using AI to recreate the voice and likeness of the late detective novelist Agatha Christie for digital classes that teach prospective writers "how to craft the perfect crime novel." A real-life actor, Vivien Keene, is standing in for Christie, with her appearance augmented by AI to resemble the author. The new class, called Agatha Christie Writing, is available today on BBC Maestro, the company's $10-per-month online course service, which usually gives you access to content from living professionals teaching subjects like graphic design, bread making, and time management.

The deepfaked Agatha Christie's teachings are "in Agatha's very own words," her great-grandson James Prichard said in a press release. The course draws on insights from the real Christie and is scripted by academics, so the actual content appears to be human-made rather than generated by a model fed all of her work. The BBC collaborated with the Agatha Christie Estate and used restored audio recordings, licensed images, interviews, and her own writings to make this all happen. Live now, the class has 11 video lessons and 12 exercises for prospective writing students, covering topics such as how to "structure an airtight plot" and "build suspense."
TOWARDSDATASCIENCE.COM
Modern GUI Applications for Computer Vision in Python

Introduction

I'm a huge fan of interactive visualizations. As a computer vision engineer, I deal almost daily with image processing tasks, and more often than not I am iterating on a problem where I need visual feedback to make decisions. Think of a very simple image processing pipeline with a single step that has some parameters to transform an image: How do you know which parameters to adjust? Does the pipeline even work as expected? Without visualizing your output, you might miss key insights and make suboptimal choices. Sometimes simply showing the output image and/or some calculated metrics is enough to iterate on the parameters, but I've found myself in many situations where a dedicated tool would be immensely helpful for iterating quickly and interactively on a pipeline. So in this article I will show you how to work with the simple built-in interactive elements of OpenCV, as well as how to build more modern user interfaces for computer vision projects using customtkinter.

Prerequisites

If you want to follow along, I recommend setting up your local environment with uv and installing the following packages:

uv add numpy opencv-python pillow customtkinter

Goal

Before we dive into the code, let's quickly outline what we want to build. The application should use the webcam feed and allow the user to select different types of filters that will be applied to the stream. The processed image should be shown in real time in the window. A rough sketch of a potential UI: a panel of filter controls on the left, with the live processed stream on the right.

OpenCV GUI

Let's start with a simple loop that fetches frames from your webcam and displays them in an OpenCV window.
import cv2

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    cv2.imshow("Video Feed", frame)

    key = cv2.waitKey(1) & 0xFF
    if key == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Keyboard Input

The simplest way to add interactivity here is with keyboard input. For example, we can cycle through different filters with the number keys.

...
filter_type = "normal"

while True:
    ...
    if filter_type == "grayscale":
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    elif filter_type == "normal":
        pass
    ...
    if key == ord('1'):
        filter_type = "normal"
    if key == ord('2'):
        filter_type = "grayscale"
    ...

Now you can switch between the normal image and the grayscale version by pressing the number keys 1 and 2. Let's also quickly add a caption to the image so we can actually see the name of the filter we're applying. We need to be careful here: if you look at the shape of the frame after the filter, you will notice that the dimensionality of the frame array has changed. Remember that OpenCV image arrays are ordered HWC (height, width, channels) with channels in BGR (blue, green, red) order, so the 640×480 image from my webcam has shape (480, 640, 3).

print(filter_type, frame.shape)
# normal (480, 640, 3)
# grayscale (480, 640)

Because the grayscale operation outputs a single-channel image, the color dimension is dropped. If we now want to draw on top of this image, we either need to specify a single-channel color for the grayscale image, or we convert the image back to the original BGR format. The second option is a bit cleaner because it lets us unify the annotation of the image.

if filter_type == "grayscale":
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
elif filter_type == "normal":
    pass

if len(frame.shape) == 2:
    # Convert grayscale to BGR
    frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)

Caption

I want to add a black border at the bottom of the image, on top of which the name of the filter will be shown.
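The shape change can be reproduced without a webcam or OpenCV. A minimal NumPy sketch (using the standard BT.601 luma weights as a stand-in for cvtColor, on a dummy frame):

```python
import numpy as np

# A dummy BGR frame in OpenCV's HWC layout: 480 rows, 640 columns, 3 channels
frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(frame.shape)  # (480, 640, 3)

# Grayscale via the BT.601 luma weights, applied to the
# B, G, R channels in that order
gray = (0.114 * frame[:, :, 0]
        + 0.587 * frame[:, :, 1]
        + 0.299 * frame[:, :, 2]).astype(np.uint8)
print(gray.shape)  # (480, 640) - the channel dimension is gone

# Restore a 3-channel image so BGR drawing colors work again
bgr_again = np.repeat(gray[:, :, np.newaxis], 3, axis=2)
print(bgr_again.shape)  # (480, 640, 3)
```

The same HW-versus-HWC distinction is what the len(frame.shape) == 2 check above detects.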
We can use the copyMakeBorder function to pad the image with a border color at the bottom, then add the text on top of this border.

# Add a black border at the bottom of the frame
border_height = 50
border_color = (0, 0, 0)
frame = cv2.copyMakeBorder(frame, 0, border_height, 0, 0, cv2.BORDER_CONSTANT, value=border_color)

# Show the filter name
cv2.putText(
    frame,
    filter_type,
    (frame.shape[1] // 2 - 50, frame.shape[0] - border_height // 2 + 10),
    cv2.FONT_HERSHEY_SIMPLEX,
    1,
    (255, 255, 255),
    2,
    cv2.LINE_AA,
)

This is how the output should look: you can switch between the normal and grayscale modes, and the frames will be captioned accordingly.

Sliders

Instead of using the keyboard as the input method, OpenCV offers a basic trackbar slider UI element. The trackbar needs to be initialized at the beginning of the script. We need to reference the same window we will later show our images in, so I will create a variable for the window name. Using this name, we can create the trackbar and let it be a selector for the index into the list of filters.

filter_types = ["normal", "grayscale"]

win_name = "Webcam Stream"
cv2.namedWindow(win_name)

tb_filter = "Filter"
# createTrackbar(trackbarName, windowName, value, count, onChange)
cv2.createTrackbar(
    tb_filter,
    win_name,
    0,
    len(filter_types) - 1,
    lambda _: None,
)

Notice how we use an empty lambda for the onChange callback; we will fetch the value manually in the loop. Everything else stays the same.

while True:
    ...
    # Get the selected filter type
    filter_id = cv2.getTrackbarPos(tb_filter, win_name)
    filter_type = filter_types[filter_id]
    ...

And voilà, we have a trackbar to select our filter. We can also easily add more filters by extending our list and implementing each processing step.

filter_types = [
    "normal",
    "grayscale",
    "blur",
    "threshold",
    "canny",
    "sobel",
    "laplacian",
]
...
import numpy as np

...
if filter_type == "grayscale":
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
elif filter_type == "blur":
    frame = cv2.GaussianBlur(frame, ksize=(15, 15), sigmaX=0)
elif filter_type == "threshold":
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, frame = cv2.threshold(gray, thresh=127, maxval=255, type=cv2.THRESH_BINARY)
elif filter_type == "canny":
    frame = cv2.Canny(frame, threshold1=100, threshold2=200)
elif filter_type == "sobel":
    frame = cv2.Sobel(frame, ddepth=cv2.CV_64F, dx=1, dy=0, ksize=5)
elif filter_type == "laplacian":
    frame = cv2.Laplacian(frame, ddepth=cv2.CV_64F)
elif filter_type == "normal":
    pass

if frame.dtype != np.uint8:
    # Scale the frame to uint8 if necessary
    cv2.normalize(frame, frame, 0, 255, cv2.NORM_MINMAX)
    frame = frame.astype(np.uint8)

Note that the threshold result must be assigned back to frame (not a throwaway variable), and that the Sobel and Laplacian filters output 64-bit floats, which is why the frame is rescaled to uint8 before display.

Modern GUI with CustomTkinter

Now, I don't know about you, but the current user interface does not look very modern to me. Don't get me wrong, there is some beauty in the style of the interface, but I prefer cleaner, more modern designs. Plus, we're already at the limit of what OpenCV offers out of the box in terms of UI elements: no buttons, text fields, dropdowns, checkboxes, or radio buttons, and no custom layouts. So let's see how we can transform the look and user experience of this basic application into a fresh and clean one.

To get started, we first create a class for our app with two frames: the first contains our filter selection on the left side, and the second wraps the image display. For now, let's start with a simple placeholder text. Unfortunately there's no out-of-the-box OpenCV component in customtkinter, so we will quickly build our own in the next few steps. But let's first finish the basic UI layout.
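The min-max rescaling step deserves a closer look. A pure-NumPy sketch of the same idea (a hypothetical helper, not OpenCV's implementation; cv2.normalize with NORM_MINMAX does the equivalent in place):

```python
import numpy as np

def to_uint8(frame: np.ndarray) -> np.ndarray:
    """Min-max scale any numeric array into the displayable 0-255 uint8 range."""
    if frame.dtype == np.uint8:
        return frame
    lo, hi = frame.min(), frame.max()
    if hi == lo:
        # Constant image: avoid division by zero, map everything to 0
        return np.zeros_like(frame, dtype=np.uint8)
    scaled = (frame - lo) * (255.0 / (hi - lo))
    return scaled.astype(np.uint8)

# Simulated Sobel output: signed 64-bit floats, like cv2.CV_64F results
edges = np.array([[-512.0, 0.0], [256.0, 512.0]])
print(to_uint8(edges))
# [[  0 127]
#  [191 255]]
```

Without this step, feeding signed floats straight to an 8-bit display path would clip negative gradients and wrap large values.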
import customtkinter

class App(customtkinter.CTk):
    def __init__(self) -> None:
        super().__init__()

        self.title("Webcam Stream")
        self.geometry("800x600")

        self.filter_var = customtkinter.IntVar(value=0)

        # Frame for filters
        self.filters_frame = customtkinter.CTkFrame(self)
        self.filters_frame.pack(side="left", fill="both", expand=False, padx=10, pady=10)

        # Frame for image display
        self.image_frame = customtkinter.CTkFrame(self)
        self.image_frame.pack(side="right", fill="both", expand=True, padx=10, pady=10)

        self.image_display = customtkinter.CTkLabel(self.image_frame, text="Loading...")
        self.image_display.pack(fill="both", expand=True, padx=10, pady=10)

app = App()
app.mainloop()

Filter Radio Buttons

Now that the skeleton is built, we can start filling in our components. For the left side, I will use the same list of filter_types to populate a group of radio buttons for selecting the filter.

# Create radio buttons for each filter type
self.filter_var = customtkinter.IntVar(value=0)
for filter_id, filter_type in enumerate(filter_types):
    rb_filter = customtkinter.CTkRadioButton(
        self.filters_frame,
        text=filter_type.capitalize(),
        variable=self.filter_var,
        value=filter_id,
    )
    rb_filter.pack(padx=10, pady=10)
    if filter_id == 0:
        rb_filter.select()

Image Display Component

Now we can get started on the interesting part: how to get our OpenCV frames to show up in the image component. Because there's no built-in component, let's create our own based on CTkLabel. This allows us to display a loading text while the webcam stream is starting up.

...
class CTkImageDisplay(customtkinter.CTkLabel):
    """
    A reusable ctk widget to display opencv images.
    """

    def __init__(
        self,
        master: Any,
    ) -> None:
        self._textvariable = customtkinter.StringVar(master, "Loading...")
        super().__init__(
            master,
            textvariable=self._textvariable,
            image=None,
        )

...
class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...
        self.image_display = CTkImageDisplay(self.image_frame)
        self.image_display.pack(fill="both", expand=True, padx=10, pady=10)

So far nothing has changed; we simply swapped out the existing label for our custom class implementation. In our CTkImageDisplay class we can define a function to show an image in the component; let's call it set_frame.

import cv2
import numpy.typing as npt
from PIL import Image

class CTkImageDisplay(customtkinter.CTkLabel):
    ...

    def set_frame(self, frame: npt.NDArray) -> None:
        """
        Set the frame to be displayed in the widget.

        Args:
            frame: The new frame to display, in opencv format (BGR).
        """
        target_width, target_height = frame.shape[1], frame.shape[0]

        # Convert the frame to PIL Image format
        frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frame_pil = Image.fromarray(frame_rgb, "RGB")

        ctk_image = customtkinter.CTkImage(
            light_image=frame_pil,
            dark_image=frame_pil,
            size=(target_width, target_height),
        )
        self.configure(image=ctk_image, text="")
        self._textvariable.set("")

Let's digest this. First we need to know how big our image component will be; we can extract that information from the shape property of our image array. To display the image in tkinter we need a Pillow Image; we cannot use the OpenCV array directly. To convert an OpenCV array to Pillow, we first change the color space from BGR to RGB and then use the Image.fromarray function to create the Pillow Image object. Next we create a CTkImage, where we use the same image regardless of the theme and set the size according to our frame. Finally we use the configure method to set the image in our label. At the end, we also reset the text variable to remove the "Loading..." text, even though it would theoretically be hidden behind the image.

To quickly test this, we can set the first image of our webcam in the constructor. (We will see in a second why this is not such a good idea.)

class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...
        cap = cv2.VideoCapture(0)
        _, frame0 = cap.read()
        self.image_display.set_frame(frame0)

If you run this, you will notice that the window takes a bit longer to pop up, but after a short delay you should see a static image from your webcam.

NOTE: If you don't have a webcam ready, you can also use a local video file by passing the file path to the cv2.VideoCapture constructor.

This is not very exciting yet, since the frame doesn't update. So let's see what happens if we try to do this naively.

class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...
        cap = cv2.VideoCapture(0)
        while True:
            ret, frame = cap.read()
            if not ret:
                break
            self.image_display.set_frame(frame)

Almost the same as before, except now we run the frame loop as we did in the previous chapter with the OpenCV GUI. If you run this, you will see... exactly nothing. The window never shows up, because we're creating an infinite loop in the constructor of the app! This is also the reason why the window only appeared after a delay in the previous example: opening the webcam stream is a blocking operation, and while it runs, the event loop for the window cannot, so the window doesn't show up yet.

Let's fix this with a slightly better implementation that lets the GUI event loop run while we also update the frame periodically. We can use the after method of tkinter to schedule a function call while yielding to the event loop during the wait time.

...
        self.cap = cv2.VideoCapture(0)
        self.after(10, self.update_frame)

    def update_frame(self) -> None:
        """
        Update the displayed frame.
        """
        ret, frame = self.cap.read()
        if not ret:
            return

        self.image_display.set_frame(frame)
        self.after(10, self.update_frame)

We still set up the webcam stream in the constructor, so we haven't solved that problem yet, but at least we can see a continuous stream of frames in our image component.

Applying Filters

Now that the frame loop is running,
we can re-implement our filters from the beginning and apply them to our webcam stream. In the update_frame function, we check the current filter variable and apply the corresponding filter function.

    def update_frame(self) -> None:
        ...

        # Get the selected filter type
        filter_id = self.filter_var.get()
        filter_type = filter_types[filter_id]

        if filter_type == "grayscale":
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        elif filter_type == "blur":
            frame = cv2.GaussianBlur(frame, ksize=(15, 15), sigmaX=0)
        elif filter_type == "threshold":
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            _, frame = cv2.threshold(gray, thresh=127, maxval=255, type=cv2.THRESH_BINARY)
        elif filter_type == "canny":
            frame = cv2.Canny(frame, threshold1=100, threshold2=200)
        elif filter_type == "sobel":
            frame = cv2.Sobel(frame, ddepth=cv2.CV_64F, dx=1, dy=0, ksize=5)
        elif filter_type == "laplacian":
            frame = cv2.Laplacian(frame, ddepth=cv2.CV_64F)
        elif filter_type == "normal":
            pass

        if frame.dtype != np.uint8:
            # Scale the frame to uint8 if necessary
            cv2.normalize(frame, frame, 0, 255, cv2.NORM_MINMAX)
            frame = frame.astype(np.uint8)

        if len(frame.shape) == 2:
            # Convert grayscale to BGR
            frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)

        self.image_display.set_frame(frame)
        self.after(10, self.update_frame)

And now we're back to the full functionality of the application: you can select any filter on the left side, and it will be applied in real time to the webcam feed!

Multithreading and Synchronization

Although the application runs as is, there are some problems with the current frame loop. Everything runs in a single thread, the main GUI thread. This is why, at startup, we don't immediately see the window pop up: the webcam initialization blocks the main thread. Now imagine we did some heavier image processing, maybe running the images through a neural network. You wouldn't want your user interface to be blocked while the network is running inference.
That would lead to a very unresponsive experience when clicking the UI elements! A better way to handle this is to separate the image processing from the user interface; it is almost always a good idea to separate your GUI logic from any non-trivial processing. So in our case, we will run a separate thread that is responsible for the image loop: it will read the frames from the webcam stream and apply the filters.

NOTE: Python threads are not "real" threads in the sense that they cannot run Python code in parallel on different logical CPU cores. With multithreading the interpreter switches context between the threads, but due to the GIL (the global interpreter lock), a single Python process can only execute bytecode in one thread at a time. If you want true parallel processing, you need multiprocessing. Since our workload here is I/O bound rather than CPU bound, multithreading suffices.

class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...
        self.webcam_thread = threading.Thread(target=self.run_webcam_loop, daemon=True)
        self.webcam_thread.start()

    def run_webcam_loop(self) -> None:
        """
        Run the webcam loop in a separate thread.
        """
        self.cap = cv2.VideoCapture(0)
        if not self.cap.isOpened():
            return

        while True:
            ret, frame = self.cap.read()
            if not ret:
                break

            # Filters
            ...

            self.image_display.set_frame(frame)

If you run this, the window now opens immediately, and we even see our loading text while the webcam stream is opening. However, as soon as the stream starts, the frames begin to flicker. Depending on many factors, you might experience different visual artifacts or errors at this stage.

Why is this happening? The problem is that we're trying to write the new frame while the internal refresh loop of the user interface may simultaneously be reading the same array to draw it on the screen.
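The note about I/O-bound work can be demonstrated with plain time.sleep calls, which release the GIL just as waiting on a camera frame does. A minimal stdlib-only sketch:

```python
import threading
import time

def io_task() -> None:
    # Simulates I/O-bound work (e.g. waiting on a camera frame):
    # sleep releases the GIL, so other threads can run meanwhile.
    time.sleep(0.2)

start = time.monotonic()
threads = [threading.Thread(target=io_task) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

# Four 0.2 s waits overlap instead of summing to 0.8 s
print(f"elapsed: {elapsed:.2f}s")
```

Had io_task been a pure-Python CPU loop instead, the four threads would have taken roughly as long as running sequentially, which is exactly the case where multiprocessing is needed.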
Both are competing for the same frame array. It is generally not a good idea to update UI elements directly from a different thread; some frameworks even prevent this and raise exceptions. In Tkinter we can do it, but we get weird results. We need some kind of synchronization between our threads.

That's where the Queue comes into play. You're probably familiar with queues from the grocery store or theme parks, and the concept here is the same: the first element that goes into the queue also leaves first (first in, first out). In this case, we actually just want a queue with a single slot. The queue implementation in Python is thread-safe, meaning we can put and get objects from different threads. Perfect for our use case: the processing thread will put the image arrays into the queue, and the GUI thread will try to get an element, but not block if the queue is empty.

class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...
        self.queue = queue.Queue(maxsize=1)

        self.webcam_thread = threading.Thread(target=self.run_webcam_loop, daemon=True)
        self.webcam_thread.start()

        self.frame_loop_dt_ms = 16  # ~60 FPS
        self.after(self.frame_loop_dt_ms, self._update_frame)

    def _update_frame(self) -> None:
        """
        Update the frame in the image display widget.
        """
        try:
            frame = self.queue.get_nowait()
            self.image_display.set_frame(frame)
        except queue.Empty:
            pass

        self.after(self.frame_loop_dt_ms, self._update_frame)

    def run_webcam_loop(self) -> None:
        ...
        while True:
            ...
            self.queue.put(frame)

Notice how we move the direct call to set_frame out of the webcam loop, which runs in its own thread, and into the _update_frame function running on the main thread, scheduled repeatedly at 16 ms intervals. It's important to use get_nowait in the main thread; if we used get, we would block there.
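The single-slot handoff is easy to verify without any GUI. A small stdlib-only sketch of the same producer/consumer pattern, with integers standing in for frames:

```python
import queue
import threading

frame_queue: "queue.Queue[int]" = queue.Queue(maxsize=1)

def producer() -> None:
    # Stands in for the webcam thread: put() blocks while the slot is full,
    # so the producer can never race ahead of the consumer.
    for i in range(3):
        frame_queue.put(i)

t = threading.Thread(target=producer, daemon=True)
t.start()

received = []
while len(received) < 3:
    try:
        # Non-blocking read, like the GUI thread's _update_frame
        received.append(frame_queue.get_nowait())
    except queue.Empty:
        pass  # nothing new yet; a real GUI would simply redraw later

t.join()
print(received)  # [0, 1, 2]
```

Because the queue is thread-safe, no manual locking is needed around the shared frames; the queue itself is the synchronization point.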
get_nowait does not block, but raises a queue.Empty exception if there is no element to fetch, so we catch and ignore it. In the webcam loop we can use the blocking put function, because it doesn't matter that run_webcam_loop blocks; nothing else needs to run there. And now everything runs as expected: no more flashing frames!

Conclusion

Combining a UI framework like Tkinter with OpenCV lets us build modern-looking applications with an interactive graphical user interface. Because the UI runs in the main thread, we run the image processing in a separate thread and synchronize the data between the threads using a single-slot queue. You can find a cleaned-up version of this demo, with a more modular structure, in the repository below. Let me know if you build something interesting with this approach. Take care!

Check out the full source code in the GitHub repo: https://github.com/trflorian/ctk-opencv

The post Modern GUI Applications for Computer Vision in Python appeared first on Towards Data Science.
WWW.SKYNEWSARABIA.COM
Youth at risk: A shocking decline in happiness worldwide

According to The New York Times, this study is part of the "Global Flourishing Study" conducted by researchers from Harvard University and Baylor University, which surveyed more than 200,000 people across 20 countries.

Declining well-being

The study revealed that young people in many countries suffer from mental and physical health problems and face challenges in finding meaning in their lives and in building successful social relationships. Tyler J. VanderWeele, the study's lead researcher and director of the Human Flourishing Program at Harvard, called it "a stark picture," adding that the findings raise an important question: "Are we investing enough in the well-being of youth?"

The data showed that well-being among young people is below average, with elevated rates of anxiety and depression as well as lower participation in social activities. Another report, from the Harvard Graduate School of Education in 2023, indicated that young adults in the United States suffer from poorer mental health than adolescents, recording high rates of anxiety and depression.

The findings suggest the problem is especially acute in the United States, where the gap in flourishing between young people and older adults was the largest. By contrast, some countries such as Poland and Tanzania showed flourishing declining with age, while others such as Japan and Kenya followed the traditional pattern in which flourishing peaks in youth and in old age.

The researchers pointed to social isolation, preoccupation with screens, and growing societal pressure toward perfectionism as among the main factors contributing to the deterioration of young people's well-being. As Laurie Santos, professor of psychology at Yale University, put it: "Study after study shows that social connection is crucial for happiness, and young people are spending less time with their friends than they were a decade ago."

The researchers questioned whether investment in youth well-being is sufficient in many countries. Dr. Emiliana R. Simon-Thomas, science director of the Greater Good Science Center at the University of California, added that "our well-being depends on the well-being of every other human. We cannot be happy and build a fence around ourselves."
MASHABLE.COM
How to watch Inter Miami vs. Vancouver online for free

TL;DR: Live stream Inter Miami vs. Vancouver in the 2025 Concacaf Champions Cup for free on YouTube. Access this free live stream from anywhere in the world with ExpressVPN.

The Concacaf Champions Cup final beckons for Vancouver. They beat Inter Miami 2-0 in the first leg of their semi-final matchup, but there is still work to do. Any team with the likes of Luis Suárez, Sergio Busquets, and Lionel Messi stands a chance of turning things around. This game isn't over just yet. If you want to watch Inter Miami vs. Vancouver in the Concacaf Champions Cup for free from anywhere in the world, we have all the information you need.

When is Inter Miami vs. Vancouver?

Inter Miami vs. Vancouver in the Concacaf Champions Cup kicks off at 8 p.m. ET on April 30. The fixture takes place at Chase Stadium.

How to watch Inter Miami vs. Vancouver for free

Inter Miami vs. Vancouver in the Concacaf Champions Cup is available to live stream for free on YouTube. This free live stream is available in most locations around the world, but not in North or Central America. Fortunately, fans in these excluded territories can still access it with a VPN. These tools can hide your real IP address (digital location) and connect you to a secure server in the UK (or somewhere else with access), meaning you can unblock free live streams of the Concacaf Champions Cup from anywhere in the world. Live stream Inter Miami vs.
Vancouver by following these simple steps:

1. Subscribe to a streaming-friendly VPN (like ExpressVPN).
2. Download the app to your device of choice (the best VPNs have apps for Windows, Mac, iOS, Android, Linux, and more).
3. Open up the app and connect to a server in the UK (or somewhere else with access).
4. Visit YouTube.
5. Live stream Inter Miami vs. Vancouver for free.

The best VPNs for streaming are not free, but most do offer free trials or money-back guarantees. By leveraging these offers, you can watch Inter Miami vs. Vancouver in the Concacaf Champions Cup without actually spending anything. This clearly isn't a long-term solution, but it does give you enough time to stream select Concacaf Champions Cup fixtures before recovering your investment. If you want to retain permanent access to free streaming services from around the world, you'll need a subscription. Fortunately, the best VPN for streaming live sport is on sale for a limited time.

What is the best VPN for YouTube?
ExpressVPN is the best choice for bypassing geo-restrictions to stream live sport on YouTube, for a number of reasons:

- Servers in 105 countries, including the UK
- Easy-to-use app available on all major devices, including iPhone, Android, Windows, Mac, and more
- Strict no-logging policy so your data is secure
- Fast connection speeds free from throttling
- Up to eight simultaneous connections
- 30-day money-back guarantee

A two-year subscription to ExpressVPN is on sale for $139 and includes an extra four months for free (61% off for a limited time). This plan also includes a year of free unlimited cloud backup and a generous 30-day money-back guarantee. Live stream Inter Miami vs. Vancouver in the Concacaf Champions Cup for free with ExpressVPN.

Joseph Green is the Global Shopping Editor for Mashable.
He covers VPNs, headphones, fitness gear, dating sites, streaming, and shopping events like Black Friday and Prime Day. Joseph is also Executive Editor of Mashable's sister site, AskMen.
-
ME.PCMAG.COM
Crucial P510

Pros: Fast throughput speeds, in a big-picture sense; compatible with laptops with PCIe 5.0 M.2 slots; priced well below other PCIe 5.0 SSDs.
Cons: Not a standout in PCMark 10 and 3DMark Storage testing; capacity maxes out at 2TB.

Crucial P510 Specs
- Bus Type: PCI Express 5.0
- Capacity (Tested): 1TB
- Controller Maker: Phison
- Interface (Computer Side): PCI Express
- Internal Form Factor: M.2 Type-2280
- Internal or External: Internal
- NAND Type: TLC
- NVMe Support: Yes
- Rated Maximum Sequential Read: 11,000MBps
- Rated Maximum Sequential Write: 9,500MBps
- Terabytes Written (TBW) Rating: 600TBW
- Warranty Length: 5 years

The Crucial P510 ($119.99 for 1TB as tested) is Micron's third PCI Express 5.0 internal SSD, after the Crucial T700 and T705, both PCMag Editors' Choice award winners. Unlike these elite speedsters—the T705 is the fastest consumer SSD we have tested to date, with the T700 in the second tier—the P510 is built for mainstream use. Although it has the throughput speed we'd expect from a Gen 5 drive, it conserves power and runs cooler than its brethren, employing a simple heat spreader rather than the sort of massive finned or fanned heatsink that often ships with a PCIe 5.0 SSD. So it's not surprising that its gaming and general-storage test scores are more in line with what we would expect from a high-performance previous-generation PCIe 4.0 SSD like the steadfast Samsung SSD 990 Pro.

Design and Specs: DRAM-Less Gen 5 Joins the Chat
When PCI Express 5.0 SSDs hit the market in the spring of 2023, they promised unheard-of speeds for an internal SSD, but you needed a boutique desktop PC with the latest components, or you had to build such a system from scratch. Today, Gen 5 SSD support is coming to select high-end laptops, but laptops of any stripe have no room for the massive heatsink hardware most other PCIe 5.0 drives require. So manufacturers have gone to exceptional lengths to beat the heat.
The Crucial P510 consumes 25% less power than previous Crucial Gen 5 SSDs to support longer battery life. That efficiency also aims to keep the drive cool enough to minimize thermal throttling. The Crucial P510 is a four-lane solid-state drive running the NVMe 2.0 protocol over a PCIe 5.0 bus. This internal SSD comes in the standard M.2 Type-2280 (80mm-long) "gumstick" format. The drive pairs Micron 276-layer G9 3D TLC NAND with Phison's E31T, a Gen 5-optimized DRAM-less controller designed to minimize heat generation. The all-black P510 has all of its silicon on one side (the top), covered with a thin heat spreader. It can fit easily into laptops and even the PlayStation 5. It lacks—and does not require—the massive heatsinks bundled with many Gen 5 drives or offered as an option. Micron says it will offer a full-fledged heatsink for the P510 later, but unlike with elite PCI Express SSDs, it isn't critical.

System Requirements: Going Mainstream
PCIe 5.0 SSDs, even mainstream models such as the P510, promise a major throughput speed boost over PCIe 4.0 drives, but you can take full advantage of it only if you have recent hardware that supports the standard. Only the latest boutique desktops and a few recent laptops are likely to be PCIe 5.0-ready off the shelf, so you may have to build your own PC from the ground up or update an existing system to gain the connectivity required. You'll need an Intel 12th Gen or later Core CPU with a motherboard based on Intel's Z690 or Z790 chipset or later, or a Ryzen 7000 or 9000 processor with an AM5 motherboard built around an X670, X670E, or B650E chipset (or later ones). Now, an important point: Just because you have one of those chipsets doesn't guarantee that the motherboard maker actually implemented a PCIe 5.0-capable M.2 SSD slot or slots.
That's up to the board maker, so check your system's or motherboard's specs and documentation to make sure you actually have such a slot before investing in one of these drives. Some boards have PCIe 5.0 expansion slots for graphics cards and other PCI Express cards, but you need a PCIe 5.0-capable M.2 slot, specifically.

Price and Storage Capacity: A New Reasonably Priced Gen 5 Niche
The Crucial P510 is available in 1TB and 2TB versions; you can see the list prices for each in the chart below. It is priced lower than other Gen 5 SSDs we have reviewed, and a little above the high-end PCI Express 4.0 drives we compare it with in the performance section below. The P510's rated sequential read and write speeds are highest at the 1TB level; its throughput speeds are most similar to those of the first generation of PCI Express 5.0 SSDs, which have since been surpassed by more recent Gen 5 drives such as the Crucial T705 and Corsair MP700 Pro. As for durability, expressed as lifetime write capacity in total terabytes written (TBW), the P510 matches the Crucial T700 and T705 in the capacities they share. Its durability rating is a notch below the Corsair MP700 Pro, the ADATA Legend 970, and the Aorus 10000, which are rated at 700TBW for 1TB and 1,400TBW for 2TB. The Seagate FireCuda 540 is the reigning Gen 5 durability champ, with ratings of 1,000TBW for the 1TB stick and 2,000TBW for 2TB. The terabytes-written spec is a manufacturer's estimate of how much data can be written to a drive before some cells begin to fail and get taken out of service. Micron warranties the P510 for five years or until you hit the rated TBW figure in data writes, whichever comes first.
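To put the 600TBW rating in context, here is a quick back-of-the-envelope sketch. The figures (600TBW, five-year warranty) come from the spec table above; the helper function itself is purely illustrative, not anything published by Micron or part of PCMag's testing.

```python
# Illustrative only: daily write budget implied by a TBW endurance rating.
def daily_write_budget_gb(tbw: float, warranty_years: float) -> float:
    """Spread the terabytes-written rating evenly over the warranty, in GB/day."""
    days = warranty_years * 365
    return tbw * 1_000 / days  # drive makers use decimal units: 1TB = 1,000GB

# Crucial P510 1TB: rated 600TBW, 5-year warranty
print(f"{daily_write_budget_gb(600, 5):.0f} GB/day")
```

That works out to roughly 330GB of writes per day, every day, for five years; a typical client workload falls far short of that, which is why the warranty period, not the TBW figure, is usually the binding limit.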
But the drive's durability rating is such that unless you're writing unusually large amounts of data to the SSD, it's a safe bet that the P510 will last the full warranty period and well beyond.

Performance Testing: Spot-On Sequential Throughput, But Otherwise Unremarkable
In benchmarking the P510, we used our latest testbed PC, designed specifically for benchmarking PCIe 5.0 M.2 SSDs. It is built around an ASRock X670E Taichi motherboard with an AMD X670 chipset, 32GB of DDR5 memory, one PCIe 5.0 x4 M.2 slot (with lanes that have direct access to the CPU), and three PCIe 4.0 slots. The system sports an AMD Ryzen 9 7900 CPU using an AMD stock cooler; a GeForce RTX 2070 Super graphics card with 8GB of GDDR6 SDRAM; and a Thermaltake Toughpower GF1 Snow 750-watt power supply. The boot drive is an ADATA Legend 850 PCIe 4.0 SSD. (The reviewed SSD is tested as a secondary data drive.) The motherboard employs an air-cooled (fan-based) heatsink over the PCIe 5.0 M.2 slot that can be placed over the tested SSD, as I did when benchmarking the P510. We put the P510 through our usual internal solid-state drive benchmarks: Crystal DiskMark 6.0, UL's PCMark 10 Storage, and UL's 3DMark Storage benchmark. The last measures a drive's performance in several gaming-related load and launch tasks. Among the comparison drives seen in the tables below, I included not only most of the Gen 5 SSDs we have reviewed, but three of the fastest PCI Express 4.0 SSDs we have come across: the WD Black SN850X, the Samsung SSD 990 Pro, and Micron's own Crucial T500. Crystal DiskMark's sequential speed tests provide a traditional measure of drive throughput, simulating best-case, straight-line transfers of large files. We use this test to determine if our tested speeds align with the manufacturer's rated speeds. The P510 effectively matched its rated sequential read and write speeds; both scores were within 1% of their ratings.
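The "within 1%" margin is easy to make concrete. The sketch below is a hypothetical helper, not part of any benchmark suite; it uses only the rated speeds from the spec table and derives the minimum measured throughput that would still count as within tolerance.

```python
# Illustrative only: the floor a measured speed must clear to be "within 1%" of rated.
def tolerance_floor_mbps(rated_mbps: float, tolerance: float = 0.01) -> float:
    """Minimum measured throughput (MBps) still within `tolerance` of the rating."""
    return rated_mbps * (1 - tolerance)

# Crucial P510 rated sequential speeds: 11,000MBps read, 9,500MBps write
print(f"read floor:  {tolerance_floor_mbps(11_000):,.0f} MBps")
print(f"write floor: {tolerance_floor_mbps(9_500):,.0f} MBps")
```

In other words, measured sequential reads of at least about 10,890MBps and writes of at least about 9,405MBps would satisfy the "within 1% of rated" result.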
Sequential read speed was a cut above the earliest PCIe 5.0 SSDs we tested, but well short of the Crucial T705 and below the second tier of Gen 5 speedsters; write speed was the lowest of the PCIe 5 SSDs in our comparison group. The P510's read and write speeds were considerably faster than any of the PCIe 4.0 SSDs we've reviewed, which have sequential read and write speed ratings of up to 7,500MBps and 7,000MBps, respectively.Crystal DiskMark also measures a drive's 4K (small-file) read and write speeds; while the P510's 4K read speed was very similar to most of our comparison drives (both PCIe 4.0 and 5.0), its write score was the highest (by a hair) among a close range of scores of all the Gen 5 drives and well above the Gen 4 SSDs with which we compared it. Good 4K write performance is especially important for an SSD used as a boot drive, though we test them as secondary drives.As impressive as the raw speed of the P510 and other PCI Express 5.0 drives is, it's of little use if your SSD can't quickly perform the tasks you need it for. The PCMark 10 Storage Overall test measures a drive's speed in performing various routine tasks such as launching Windows, loading games and creative apps, and copying both small and large files. The P510's PCMark Overall score was the lowest of the Gen 5 SSDs we have reviewed, about 10% lower than the next-closest Gen 5 SSD, the Seagate FireCuda 540. Compared with the Gen 4 SSDs in our comparison group, the P510's score was better than the WD SN850X but worse than the Crucial T500 and the Samsung SSD 990 Pro.The PCMark 10 Overall score is an aggregate of the results of individual tests, which consist of various simulated system tasks, or "traces." The P510's scores on these tests were underwhelming relative to our comparison drives. 
It was in the middle of the pack in Windows loading, but fared worse in our game-launching tests, with low scores (among PCIe 4.0 as well as PCIe 5.0 SSDs) for Battlefield 5 and Call of Duty: Black Ops 4, while coming in second to last in Overwatch, ahead of the Samsung SSD 990 Pro. In Adobe program launching, the P510 had the lowest score in Premiere Pro and tied with the Crucial T500 for the second-lowest score on the Photoshop launching test, while edging out the WD SN850X. In the ISO Copy trace, although its score was below those of the PCIe 5.0 drives in our comparison group, it easily beat the Gen 4 SSDs. The 3DMark Storage benchmark tests an SSD's proficiency in performing various gaming-related tasks. In it, the P510 was in the thick of a close pack of four SSDs that trailed the T705 by a large margin. Based on its benchmark scores, the Crucial P510 is best for straight-line file transfers, copying, archiving, and accessing data. It's a mainstream SSD, so we wouldn't expect it to perform like the top-end Gen 5 speedsters we've reviewed, and indeed, its performance was more in line with some of the elite PCIe 4.0 sticks we compared it with.