GPU Computing Goes to AI and VR: NVIDIA's BIRTV Media Briefing Highlights
On August 23rd, the Beijing International Radio, Film and Television Exhibition (BIRTV2017) kicked off at the China International Exhibition Center (Old Hall). NVIDIA participated in the exhibition to showcase how its GPU rendering and VR technologies could be applied in media, entertainment, digital content creation, and related workflows such as architectural design, automotive R&D, and design. I was fortunate enough to be invited to visit NVIDIA's booth to experience the latest graphics and video technology and listen to the fascinating presentations from their staff. Through this, I gained valuable insights into the latest developments in GPU technology.
Keynote speakers at the media conference included He Qiqing, Director of Business Development at NVIDIA China; Zhang Wu, NVIDIA’s China Virtual Reality Business Development and Sales Manager; and Zhou Xijian, NVIDIA’s China Virtual Reality Business Development and Sales Manager.
Their talks covered the development and application of GPU-based artificial intelligence technology (with a particular emphasis on its use in the media field), GPU 360-degree high-definition video technology, and NVIDIA’s virtual reality aided design tool – Holodeck (provisionally translated as Holographic Deck).
First, the development and application of GPU-based artificial intelligence technology:
Mr. He primarily shared some of the latest updates NVIDIA had announced at the recent SIGGRAPH 2017 in the U.S.
The last two or three decades have seen major technological revolutions: the advent of the internet, the rise of the mobile internet, and now, potentially, artificial intelligence. The first two transformations have already played out, and thanks to steadily growing computational power, large-scale data computation is now becoming practical. Technologies exemplified by deep neural networks are gradually entering daily life, in areas such as intelligent translation, healthcare, and smart cities.
According to some data analysis, more than 3,000 companies worldwide are actively investing in the development of artificial intelligence. By 2020, the proportion of robots in customer service is expected to reach 85%. At that time, robots will no longer just handle consultations; they’ll also be able to manage sales, shopping, or act as shopping guides.
In the media field specifically, the applications AI makes possible center on the visual and the auditory. For instance, after learning the styles of paintings by classical or modern artists, a neural network can mimic that artistic style: given an input scene, it generates a painting in the corresponding style, such as converting a camera-captured image into a Van Gogh-like canvas.
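As a concrete illustration, here is a minimal sketch of the Gram-matrix style loss behind neural style transfer (the Gatys et al. approach). This is an assumption about the general technique such demos use, not NVIDIA's actual pipeline, and the helper names are my own.

```python
# Minimal sketch of the Gram-matrix style loss used in neural style
# transfer; illustrative only, not NVIDIA's production code.
import torch
import torch.nn.functional as F

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Correlations between feature channels; this is what encodes 'style'."""
    b, c, h, w = features.shape
    flat = features.view(b, c, h * w)
    return flat @ flat.transpose(1, 2) / (c * h * w)

def style_loss(generated_feats, style_feats):
    """Match the Gram matrices of the generated image to the style image."""
    return sum(F.mse_loss(gram_matrix(g), gram_matrix(s))
               for g, s in zip(generated_feats, style_feats))

# Usage: extract feature maps for both images from a pretrained CNN
# (e.g. several VGG layers), then minimise style_loss plus a content
# loss by gradient descent on the generated image's pixels.
```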
Media advertising is another suitable application. AI can identify trademarks in various kinds of advertisements and measure how recognizable they are and how long they stay on screen, which in turn determines the advertisement's value and payout. iQIYI, in fact, already uses related technology to rapidly review video content (video images, titles, tags, and so on), getting videos online within 2 minutes, a speed humans cannot match. At current AI recognition speeds, the various objects and people in 30-frame standard-definition video can be recognized in real time.
In animation and television post-production, which demand a great deal of digital content, new scenes in a given style can be synthesized from existing material. A movie that needs an 80s-style scene, for example, can have the computer learn from old photographs and then generate a nostalgic setting from available material. Similarly, the CG shots used in many films have traditionally captured actors' expressions and movements with complex rigs: to capture facial motion, many sensors are attached to the actor's face. One notorious difficulty is the eyes, since sensors cannot be placed on them; the tongue is just as hard for the same reason.
To achieve a lifelike result, the eyes and tongue therefore require special hand-crafted animation. Now NVIDIA and Remedy have collaborated to record video of changing human facial expressions and train neural networks on that footage, enabling CG characters to speak with full expressions, reproducing the tongue, facial muscles, and eyes, and greatly reducing the workload of past animation.
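To make this concrete, here is a toy sketch of the general idea: a network that regresses facial-animation parameters (blendshape weights) directly from video frames, replacing marker-based capture. The architecture, names, and blendshape count are illustrative assumptions, not the actual NVIDIA/Remedy model.

```python
# Toy regressor from a video frame to facial-animation parameters.
import torch
import torch.nn as nn

class Face2Anim(nn.Module):
    def __init__(self, n_blendshapes: int = 51):   # count is an assumption
        super().__init__()
        self.encoder = nn.Sequential(               # frame -> compact features
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.head = nn.Linear(32 * 4 * 4, n_blendshapes)

    def forward(self, frame):
        # One weight per blendshape; the CG rig (including the eyes and
        # tongue, which markers cannot capture) is driven by these.
        return torch.sigmoid(self.head(self.encoder(frame)))

# Training pairs video frames of an actor with artist-authored animation
# curves, so the network learns to animate the full face from video alone.
```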
Film and television enthusiasts love 4K and Blu-ray movies, but while 4K TVs are now relatively easy to come by, 4K source material is still scarce. BOE is using artificial intelligence to develop ultra-high-resolution solutions that can convert standard-definition or HD video into 4K content. Deep learning fills in the missing detail in the video: fine features such as individual strands of hair are inferred from the vast amount of material the network has learned from.
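As an illustration of how such upscaling works, here is a toy SRCNN-style network. BOE's actual solution is not public, so everything below is an assumption about the general technique.

```python
# Toy super-resolution network: upsample, then hallucinate plausible detail.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySR(nn.Module):
    def __init__(self, scale: int = 2):
        super().__init__()
        self.scale = scale
        self.extract = nn.Conv2d(3, 64, kernel_size=9, padding=4)   # patch features
        self.map = nn.Conv2d(64, 32, kernel_size=1)                 # nonlinear mapping
        self.reconstruct = nn.Conv2d(32, 3, kernel_size=5, padding=2)

    def forward(self, x):
        # Bicubic upsampling gives the size; the learned layers add the
        # "guessed" detail the article describes.
        x = F.interpolate(x, scale_factor=self.scale, mode="bicubic")
        x = F.relu(self.extract(x))
        x = F.relu(self.map(x))
        return self.reconstruct(x)

# Training would minimise e.g. L1 loss between TinySR(low_res) and the
# true high-resolution frame over a large video dataset.
```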
On the audio side, the main applications are customer service and voice assistants. Most companies currently use NVIDIA GPUs for speech-recognition training, improving the accuracy of both text and speech recognition. AI can also make accurate recommendations from a precise user profile: on the consumption side, it helps enterprises quickly understand user preferences by building that profile from the text and pictures users post on social platforms.
Second, GPU 360-degree high-definition video technology:
Mr. Zhang mainly introduced the tools NVIDIA provides for VR, which help content developers use GPUs to produce better VR content faster and more efficiently. The VRWorks SDK brings realism to VR in four areas: 360-degree video, image rendering, audio, and physics with haptics. This presentation focused on the video-capture portion.
Physics and haptics here mean simulating the sensations real-world objects give us. When a bottle of water is turned upside down, the water's surface shifts under gravity; when it falls to the ground, it makes a sound. Gravity, material behavior, sound, and the force feedback you feel on contact can all be simulated by the GPU and its algorithms, so that you experience the same or similar sensations.
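A toy one-dimensional example of this kind of simulation, assuming simple gravity and a restitution-based bounce (real engines such as NVIDIA PhysX run far richer models on the GPU):

```python
# Toy rigid-body step: gravity plus a bounce with energy loss.
GRAVITY = -9.81      # m/s^2
RESTITUTION = 0.6    # fraction of speed kept after bouncing

def step(height: float, velocity: float, dt: float) -> tuple[float, float]:
    """Advance one falling object by dt seconds."""
    velocity += GRAVITY * dt
    height += velocity * dt
    if height <= 0.0:            # hit the ground: trigger sound, haptics...
        height = 0.0
        velocity = -velocity * RESTITUTION
    return height, velocity

# Simulate a bottle dropped from 1 m for two seconds of simulated time.
h, v = 1.0, 0.0
for _ in range(120):             # 120 steps at 60 Hz
    h, v = step(h, v, 1.0 / 60.0)
```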
As for audio, VRWorks Audio is devoted to simulating how sound behaves in a real environment, so that from the sound alone the user can sense the surrounding space and how far away each source is.
For image rendering, the latest techniques are mainly about accelerating development: work that once took months may now take only a few weeks.
Video capture is mainly applied to VR live broadcast and 360-degree panoramic video. NVIDIA is working with a panoramic-camera manufacturer: the vendor shoots the footage and uses NVIDIA's SDK for stitching and editing. Last year this yielded monoscopic 2D panoramic video; this year it is binocular stereoscopic panoramic video, a sign of how much the available computing power has grown.
Features of the VRWorks 360 Video SDK (a free download from the official website): (1) real-time and offline video stitching; (2) both mono and stereo output; (3) up to 32 channels of 4K video input.
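The SDK's real C API is not reproduced here; the sketch below only illustrates the core math a stitcher relies on, projecting a 3D viewing direction into the equirectangular panorama it outputs. Function names and axis conventions are my own assumptions.

```python
# Core projection of panoramic stitching: direction -> equirectangular pixel.
import math

def direction_to_equirect(x: float, y: float, z: float,
                          width: int, height: int) -> tuple[int, int]:
    """Map a unit direction vector to pixel coordinates in an
    equirectangular panorama of size width x height."""
    lon = math.atan2(x, z)                      # -pi..pi around the horizon
    lat = math.asin(max(-1.0, min(1.0, y)))     # -pi/2..pi/2 up/down
    u = (lon / (2 * math.pi) + 0.5) * (width - 1)
    v = (0.5 - lat / math.pi) * (height - 1)
    return int(u), int(v)

# A stitcher evaluates this mapping (plus each camera's calibration and
# blending weights) for every output pixel; stereo output repeats the
# process with a per-eye offset, roughly doubling the work.
```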
Mr. Zhang gave an example of NVIDIA GPU rendering capability: a train model comprising 150 million faces. With the previous generation of hardware, two Quadro M6000 cards could render only 5 million faces at once, roughly the front half of the locomotive. With the P6000 plus VR SLI and Single Pass Stereo, overall performance improved 9 times, raising the 5 million faces rendered simultaneously to 45 million, enough to see the train from the locomotive back. Adding Occlusion Culling on top of that, performance jumped from the original 5 million faces all the way to the full 150 million.
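The speed-up arithmetic in that demo, worked out explicitly:

```python
# Speed-ups reported in the train demo, step by step.
baseline = 5_000_000        # faces two Quadro M6000s could render at once

# Quadro P6000 + VR SLI + Single Pass Stereo: a reported 9x improvement.
with_sps = baseline * 9     # 45,000,000 faces

# Occlusion culling skips faces hidden behind others, reaching the full model.
full_model = 150_000_000
print(full_model // baseline)   # 30, i.e. a 30x jump over the baseline
```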
Such rendering capability is very helpful for CAD design companies, and it reflects both the raw computing power and the specialized rendering acceleration of NVIDIA GPUs.
Third, the virtual-reality-aided design tool Holodeck (provisionally translated as Holographic Deck):
Mr. Zhou introduced the VR-aided design tool Holodeck. The holodeck first appeared in Star Trek, and later films such as Iron Man and Prometheus featured similar concepts. NVIDIA's version is, of course, not the sci-fi device from the movies, but it achieves a comparable visual effect through virtual reality.
Realizing a holodeck chiefly means using computers to simulate the physics of an entire scene, which demands enormous computing power. It mirrors how games have evolved: decades ago they were pixelated images, then came 3D games, and now VR games are growing ever more sophisticated and complete.
A holodeck must provide realistic feedback: force feedback when you touch an object, sights that match the scene, sounds you can hear. It may also need artificial intelligence to recognize what you are saying, along with 360-degree video generated in real time, presenting a believable world to every sense.
NVIDIA's Holodeck is designed according to this philosophy. At the booth they provided an HTC Vive for hands-on time with the application; I tried it myself, and describe the specific experience later.
Holodeck had a supercar project loaded (a real sports-car manufacturer's design project file) with a total of 50 million polygons, detailed enough that Holodeck can render physically based lighting in real time. You can move around every detail, see through to the interior, and unfold the parts, as if a supercar were really sitting in front of you.
Holodeck's visuals are based on real physical models, and it provides virtual interaction to match: two people in different locations can meet and communicate in the same virtual scene. The scene can also be used to train AI robots, which acquire operational skills in VR; the learned data is then imported directly into a real robot, which can immediately perform the same operation in the same physical scene.
There is also physical feedback: when the controller touches a virtual object, it responds accordingly.
One key point of Holodeck is gaze-point (foveated) rendering, the focus-rendering technique covered in earlier VR news. In today's VR, every region of the scene is equally sharp, but our eyes actually focus, blurring whatever is nearer or farther than the point of attention; that blur is part of how we perceive visual depth, and current VR cannot reproduce it. Doing so requires eye-tracking technology, NVIDIA's gaze-point rendering, and suitable VR display technology.
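A minimal sketch of the foveated-rendering idea follows, with illustrative radii and rates rather than NVIDIA's actual parameters.

```python
# Toy foveated rendering: shading resolution falls off with distance
# from the tracked gaze, mimicking how only the fovea resolves detail.
import math

def shading_rate(pixel_xy, gaze_xy, fovea_radius=100.0, mid_radius=300.0):
    """Return samples-per-pixel for a pixel given the gaze position
    (all coordinates in screen pixels; radii are illustrative)."""
    dist = math.dist(pixel_xy, gaze_xy)
    if dist <= fovea_radius:
        return 1.0    # full resolution where the user is looking
    if dist <= mid_radius:
        return 0.5    # half resolution in the near periphery
    return 0.25       # quarter resolution in the far periphery

# An eye tracker in the headset supplies gaze_xy each frame; the
# renderer then spends most of its work where the eye can see detail.
```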
Fourth, questions from the media:
We first asked: What are the outstanding features of GPU-based artificial intelligence compared to other similar products?
Mr. He's answer can be summarized in two points. First, the GPU's programming advantages: a rich toolchain, related libraries, and a full ecosystem. The earliest researchers did this work on GPUs, so the toolchain has accumulated a real head start; the tools are mature, and developers are accustomed to them. Second, the pace of neural-network development: architectures change very quickly, and compared with other similar products, the NVIDIA GPU is broadly applicable and can adapt to architectural changes, offering better versatility.
Then there was the question of cooperation around the 360-degree video rendering SDK. In essence, NVIDIA provides it as an open tool; the software can simply be downloaded.
Then there was the panorama-stitching problem: a panoramic camera has multiple lenses, so if someone walks from the edge of one lens's view into the edge of another's, they may deform during the motion, or vanish and suddenly reappear. How is that solved?
Mr. Zhang said he could not give an exact answer for the parts that involve the underlying algorithm, but the issue is mainly one of time synchronization between the cameras: if the timestamps of the frames from the multiple lenses carry no error, the artifact does not occur.
With regard to Holodeck's multi-user collaboration, Mr. Zhou said that collaborating across locations requires adequate network conditions, plus a powerful NVIDIA-equipped machine for local acceleration.
Then came questions about Holodeck's practical use. The team stated that Holodeck is a VR tool for real engineering-design environments and a general-purpose platform that will serve today's manufacturing and design fields, chiefly solving real-time operation and multi-user collaboration in VR. The current development version targets the automotive industry, but in the future it can be applied to any field with similar needs, such as construction. In short, the platform was not created for the automotive industry alone; it is meant for many kinds of scenarios, so that more industry sectors can benefit from it.
Fifth, the hands-on Holodeck experience:
(Photo: media colleagues experiencing Holodeck)
After the Q&A ended, I entered the Holodeck demonstration room, where the staff helped me put on the Vive headset and guided me through the experience.
Notably, Holodeck also provides an interactive brush tool. I used it to draw a 3D cage suspended in midair and examined it from multiple angles; the three-dimensional effect was genuinely good. This is not a flat drawing that merely looks 3D: the drawn image itself occupies space, though because it exists only in the virtual environment, there is no way to photograph it.
Other features of the experience included teleportation, which let me move around the sports car and observe it from multiple angles (the physical space was limited, so simply walking over was impossible); changing the car's color; a see-through mirror (a hand-held mirror that reveals the internal structure); an exploded view (all the car's parts separate and hang suspended in the air); and scene switching (viewing the car under different conditions, with very lifelike changes in how light plays on the body).
From an experience standpoint: although individual pixels are still visible through the Vive, the scene is so realistic that you quickly forget them. Tracking was very smooth throughout, with no perceptible latency. Long-distance movement and manipulation in VR differ from reality, but you get the hang of them quickly. I could, however, feel some visual shortcomings in the VR scene, namely the gaze-point rendering issue mentioned earlier: because sharpness does not vary across the scene, even with binocular stereoscopic vision the sense of visual depth is relatively weak. While drawing the cage I thought two of my strokes lay on the same plane, only to turn and discover they were at different depths, something I could not perceive when viewing them head-on.
In the end, virtual reality at this level can already fool the eye. When the sports car's parts flew apart, one of them snapped toward me so fast that I ducked instinctively. And after the twenty-minute session, taking off the headset, I had perhaps adapted to the virtual world a little too well: the real world looked slightly unreal for a moment (which, of course, also shows that the simulated world still differs from reality in small details).