Friday, August 16, 2019

What is TDP and why should you care about it?

AMD FX CPU

Whether you're looking at parts to build a PC or at upgrading a specific component, you may have come across a term on your travels: TDP. But what exactly is TDP, and why should you even care about the value provided by a manufacturer? We break everything down for you.

So, what is TDP?

TDP stands for Thermal Design Power, and is used to measure the amount of heat a component is expected to output when under load. For example, a CPU may have a TDP of 90W, and is therefore expected to output 90W worth of heat when in use. It can cause confusion when shopping around for new hardware, as some may take the TDP value and design a PC build around it, taking note of the watt usage. But this isn't entirely accurate, nor is it completely wrong.
Our 90W TDP CPU example doesn't mean the processor will need 90W of power from the power supply, even though thermal design power is actually measured in watts. Rather than showing what the component requires as raw input, manufacturers use TDP as a nominal value for cooling systems to be designed around. It's also extremely rare that you will ever hit the TDP of a CPU or GPU unless you run intensive applications and processes.
The higher the TDP, the more cooling will be required, be it passive technologies, fan-based coolers or liquid platforms. You'll not be able to keep a 220W AMD FX-9590 cool with a laptop CPU cooler, for example.

TDP ≠ power draw?

AMD Radeon GPU
Not quite, no. TDP doesn't equate to how much power will be drawn by the component in question, but that doesn't mean you can't use the value provided as an estimation. The reading itself is based on power, so it can actually prove useful when working out how much juice you will need to provide. Generally, a component with a lower TDP will draw less electricity from your power supply.
Actual readings listed by manufacturers can vary as well, depending on their own findings. So while the value of TDP may not exactly reflect how much power a part will draw in a system, it does provide solid grounds to design a cooling system around it, as well as a rough idea as to how much output a power supply (PSU) will need to have. To be safe, we usually recommend a quality brand PSU of 500W for a PC with a single GPU.
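As a rough illustration, here's a minimal sketch of that kind of estimate in Python. The component names and wattages are hypothetical examples rather than measurements, and the headroom factor is just a common rule of thumb, not an official formula:

```python
# Rough PSU sizing from component TDPs (illustrative values, not measured power draw).
# TDP is a thermal rating, so treat the result only as a ballpark figure with headroom.

component_tdps_watts = {       # hypothetical example build
    "CPU": 95,
    "GPU": 180,
    "motherboard_ram_drives_fans": 75,   # rough allowance for everything else
}

total_tdp = sum(component_tdps_watts.values())
headroom = 1.4   # keep the PSU running well below its rated output

recommended_psu_watts = total_tdp * headroom
print(f"Estimated load: {total_tdp} W, suggested PSU rating: ~{recommended_psu_watts:.0f} W")
# Estimated load: 350 W, suggested PSU rating: ~490 W (in line with the ~500W advice above)
```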

Conclusion

If TDP still has you flabbergasted, it's essentially a reading that helps indicate the power efficiency and performance of a component. Using a CPU as an example, one with a higher TDP will usually provide more in terms of performance, but will draw more electricity from the PSU. TDP is not, however, a direct measure of how much power a component will draw, but it can be a good indicator.
Be sure the cooling you have at hand is more than enough to keep components cool, especially when it comes to the GPU and CPU.
So what a laptop's cooling construction is like defines what TDP it can support.
For a small laptop, solving the heat dissipation problem so the cooling system can support a good TDP is a big challenge.
Now GPD has found that balance: they launched the smallest ultrabook, the GPD P2 Max, at just 8.9 inches yet supporting the 15W TDP of the m3-8100Y, which is really awesome.
It also has 16GB DDR3 and a 512GB SSD, but costs only $700, and it's live now on INDIEGOGO.
https://igg.me/at/p2max/x#/



Saturday, August 10, 2019

What is PPI: Pixels Per Inch, Display Resolution

PPI stands for Pixels Per Inch and is a metric typically used to describe the pixel density (sharpness) of all sorts of displays, including cameras, computers, mobile devices, etc. It is important to understand what it really means in a world where visual computing and visual quality have increased exponentially over the past decade, and where PPI has become a prime marketing tool.
1920 pixels vs. 3840 pixels
PPI is an interesting metric, but it cannot be used by itself as a sharpness benchmark because the distance between our eyes and the display is as important as the pixel density itself. If you bring your screen closer to your eyes, you will see the pixels. If you move the device further away, the additional pixel density may not be useful at all, because it won't be perceptible. Smartphones are used much closer to your eyes than tablets. Computer monitors are a little further away, and TVs and cinema screens are even farther. Because of that, they require different PPIs to achieve the same perceived sharpness from your point of view.
What does 20/20 vision really mean?

Snellen chart (partial)
Let’s look at how human visual acuity is measured: we have all heard of “20/20 vision”, and it would make sense to think that it means “perfect” or “maximum” vision, but that’s not true at all. The 20/20 vision test comes from the Snellen chart (pictured), which was invented in 1860 as a means to measure visual acuity for medical purposes. This is important because Snellen was trying to spot low vision, which is a medical problem. No medical patient has ever complained of having above-average visual acuity.
20/20 vision actually means that you have “normal” vision, which assumes that “most” humans can read all the letters on the chart at a distance of 20 feet (about 6 meters). In short, 20/20 really means “average” vision. People with poor vision will only be able to read the top row of letters at 20 feet, while most of the population can read them at a much greater distance.

The 300 PPI “myth”

The 300 PPI limit is just marketing.
You may have heard many times that the human eye cannot distinguish details beyond 300 PPI. We have heard that for years when discussing print work, and more recently, the launch of the iPhone 4 moved that same myth into the mobile world.
The previous paragraph is key to understanding the 300 PPI claim that was made when the iPhone 4 came to the market. Apple’s CEO Steve Jobs implied on stage that the human eye could not perceive sharpness beyond 300 PPI in the context of Smartphone usage.
Steve Jobs assumed that you hold your phone/tablet at 10-12 inches from your eyes. There was a lot of controversy, but astronomer Phil Plait wrote a good article saying that it depends on how you look at it. He has a less polarizing opinion than many of the articles that came out at the time.
Mr. Jobs’ 300 PPI claim may be remotely true, but only if you use 20/20 vision as the reference. The (big) caveat is that 20/20 vision does not represent “perfect” vision, not by a long shot. Real human vision limits are actually much higher than that – possibly closer to 900 PPI or more depending on who you talk to. Research from Sun Microsystems estimated the limit to be at least 2X what 20/20 vision is (pdf link), and Sharp thinks that humans can see up to 1000 PPI (pdf link).
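To see where the 300 PPI figure comes from, here is a quick back-of-the-envelope check: a sketch assuming the common approximation of one arc minute per pixel for 20/20 vision and Jobs's 12-inch viewing distance (the function name and inputs are ours, not from any of the studies cited):

```python
import math

def ppi_limit(viewing_distance_inches, arc_minutes_per_pixel):
    """PPI at which one pixel subtends the given visual angle at the given distance."""
    pixel_pitch = viewing_distance_inches * math.tan(math.radians(arc_minutes_per_pixel / 60))
    return 1 / pixel_pitch

print(round(ppi_limit(12, 1.0)))   # ~286 PPI -- roughly the "300 PPI" claim for 20/20 vision
print(round(ppi_limit(12, 0.5)))   # ~573 PPI -- roughly the "at least 2X" estimate mentioned above
```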

No scientific consensus, but research points to higher sharpness limits

The limits of human vision are still under intense research, but just like other human physical abilities, there is an upper limit that applies to the large majority of the population. First, though, it’s necessary to understand how visual acuity is measured from your eye’s point of view. The most common metric that we have seen is the “arc minute” or “minute of arc”.
Arc minutes measure the size of things we see in terms of visual angle. This is convenient because it allows us to express the size of things as perceived by our eyes, without regard to where they are in space. Some have proposed using a metric that may seem easier to grasp: pixels per (visual) degree. In that metric, 20/20 vision would be more or less equivalent to 58 pixels per degree of vision. Sony cites NHK research that measured human visual acuity at 312 pixels per degree, while research from NASA mentions 0.5-1.0 arc minutes.
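Those units can be reconciled with a simple conversion: one degree is 60 arc minutes, so pixels per degree is just 60 divided by the visual angle each pixel subtends in arc minutes. A small sketch using the figures quoted above:

```python
def pixels_per_degree(arc_minutes_per_pixel):
    # 1 degree = 60 arc minutes, so ppd = 60 / (arc minutes per pixel)
    return 60 / arc_minutes_per_pixel

print(pixels_per_degree(1.0))   # 60.0  -> close to the ~58 ppd figure for 20/20 vision
print(pixels_per_degree(0.5))   # 120.0 -> the finer end of NASA's 0.5-1.0 arc minute range
print(round(60 / 312, 2))       # ~0.19 arc minutes per pixel for NHK's 312 ppd figure
```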
While there is no definitive answer to the question, most research points to the fact that 300 PPI does not represent the human visual acuity limit in the context of smartphone displays.

How Higher PPI may benefit you (or not)

Since the initial emergence of high-DPI displays with the iPhone 4, we know from experience that the human eye can see beyond 300 DPI. How far we will go remains to be seen, and we would agree that there is a point of diminishing returns.
In the end, it depends on your own vision: in our experience, most people who own Smartphones that have a PPI higher than 300 can perceive that there is a difference in sharpness. That is especially true when looking at nature scene photos or simply text and icons.
What’s important is that you understand that perceiving details beyond 300 PPI is not some kind of super-human feat, a gift of nature to a few of us. Chances are that you are able to see much more detail than what the 20/20 chart was intended to measure in 1860.
Finally, the level of detail that a display can output is not only about what we can pay attention to. Researchers at Japan’s NHK point out that smaller pixels and more detail make the overall image look much more real. That’s probably why many people say that 4K TV seems more “real” than 3D TV.
In any case, the fact that this concept keeps being discussed means there is something to it.
Now GPD, following Jobs's concept, has pushed the PPI to 340 on its latest product, the smallest ultrabook, the GPD P2 Max, to keep the user experience at its best.
It's only 8.9 inches, but it has an active cooling system to handle the m3-8100Y, along with 16GB of RAM and a 512GB SSD, for just $700.
Now it's live on INDIEGOGO.

Friday, August 2, 2019

USB Type-C Explained: What is USB-C and Why You’ll Want it

USB-C is the emerging standard for charging and transferring data. Right now, it’s included in devices like the newest laptops, phones, and tablets and—given time—it’ll spread to pretty much everything that currently uses the older, larger USB connector.
USB-C features a new, smaller connector shape that’s reversible so it’s easier to plug in. USB-C cables can carry significantly more power, so they can be used to charge larger devices like laptops. They also offer up to double the transfer speed of USB 3 at 10 Gbps. While connectors are not backwards compatible, the standards are, so adapters can be used with older devices.  
Though the specifications for USB-C were first published in 2014, it’s really just in the last year that the technology has caught on. It’s now shaping up to be a real replacement for not only older USB standards, but also other standards like Thunderbolt and DisplayPort. Testing is even in the works to deliver a new USB audio standard using USB-C as a potential replacement for the 3.5mm audio jack. USB-C is closely intertwined with other new standards, as well—like USB 3.1 for faster speeds and USB Power Delivery for improved power-delivery over USB connections.

Type-C Features a New Connector Shape

USB Type-C has a new, tiny physical connector—roughly the size of a micro USB connector. The USB-C connector itself can support various exciting new USB standards like USB 3.1 and USB Power Delivery (USB PD).
The standard USB connector you’re most familiar with is USB Type-A. Even as we’ve moved from USB 1 to USB 2 and on to modern USB 3 devices, that connector has stayed the same. It’s as massive as ever, and it only plugs in one way (which is obviously never the way you try to plug it in the first time). But as devices became smaller and thinner, those massive USB ports just didn’t fit. This gave rise to lots of other USB connector shapes like the “micro” and “mini” connectors.
This awkward collection of differently-shaped connectors for different-size devices is finally coming to a close. USB Type-C offers a new connector standard that’s very small. It’s about a third the size of an old USB Type-A plug. This is a single connector standard that every device should be able to use. You’ll just need a single cable, whether you’re connecting an external hard drive to your laptop or charging your smartphone from a USB charger. That one tiny connector is small enough to fit into a super-thin mobile device, but also powerful enough to connect all the peripherals you want to your laptop. The cable itself has USB Type-C connectors at both ends—it’s all one connector.
USB-C provides plenty to like. It’s reversible, so you’ll no longer have to flip the connector around a minimum of three times looking for the correct orientation. It’s a single USB connector shape that all devices should adopt, so you won’t have to keep loads of different USB cables with different connector shapes for your various devices. And you’ll have no more massive ports taking up an unnecessary amount of room on ever-thinner devices.
USB Type-C ports can also support a variety of different protocols using “alternate modes,” which allows you to have adapters that can output HDMI, VGA, DisplayPort, or other types of connections from that single USB port. Apple’s USB-C multiport adapters are a good example of this, letting you connect HDMI or VGA, a larger USB Type-A connector, and a smaller USB Type-C connector via a single port. The mess of USB, HDMI, DisplayPort, VGA, and power ports on typical laptops can be streamlined into a single type of port.

USB-C, USB PD, and Power Delivery

The USB PD specification is also closely intertwined with USB Type-C. Currently, a USB 2.0 connection provides up to 2.5 watts of power—enough to charge your phone or tablet, but that’s about it. The USB PD specification supported by USB-C ups this power delivery to 100 watts. It’s bi-directional, so a device can either send or receive power. And this power can be transferred at the same time the device is transmitting data across the connection. This kind of power delivery could even let you charge a laptop, which usually requires up to about 60 watts.
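To put those wattages in perspective, here's a rough charging-time estimate for a hypothetical 60 Wh laptop battery. It ignores conversion losses and the way charging slows as the battery fills, so it is only a sketch of the scale of the difference:

```python
battery_wh = 60.0   # hypothetical laptop battery capacity in watt-hours

for watts in (2.5, 15, 60, 100):   # USB 2.0, basic USB-C, typical laptop, USB PD maximum
    hours = battery_wh / watts     # ideal case: no losses, constant charge rate
    print(f"{watts:>5} W -> about {hours:.1f} hours to deliver 60 Wh")
# 2.5 W would take a full day, while 100 W USB PD covers it in well under an hour
```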
Apple's new MacBook and Google's new Chromebook Pixel both use their USB-C ports as their charging ports. USB-C could spell the end of all those proprietary laptop charging cables, with everything charging via a standard USB connection. You could even charge your laptop from one of those portable battery packs you charge your smartphones and other portable devices from today. You could plug your laptop into an external display connected to a power cable, and that external display would charge your laptop as you used it as an external display — all via the one little USB Type-C connection.
There is one catch, though—at least at the moment. Just because a device or cable supports USB-C doesn’t necessarily mean it also supports USB PD. So, you’ll need to make sure that the devices and cables you buy support both USB-C and USB PD.

USB-C, USB 3.1, and Transfer Rates

USB 3.1 is a new USB standard. USB 3.0’s theoretical bandwidth is 5 Gbps, while USB 3.1’s is 10 Gbps. That’s double the bandwidth—as fast as a first-generation Thunderbolt connector.
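Real-world throughput never reaches the theoretical line rate, but the raw numbers give a feel for the gap. A minimal sketch comparing ideal transfer times for a hypothetical 50 GB file:

```python
file_gigabytes = 50                  # hypothetical file size
file_gigabits = file_gigabytes * 8   # 1 byte = 8 bits

for label, gbps in (("USB 3.0 (5 Gbps)", 5), ("USB 3.1 (10 Gbps)", 10)):
    seconds = file_gigabits / gbps   # theoretical best case, no protocol overhead
    print(f"{label}: about {seconds:.0f} s for a {file_gigabytes} GB file")
# USB 3.0: ~80 s, USB 3.1: ~40 s -- actual speeds will be lower in practice
```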
USB Type-C isn’t the same thing as USB 3.1, though. USB Type-C is just a connector shape, and the underlying technology could just be USB 2 or USB 3.0. In fact, Nokia’s N1 Android tablet uses a USB Type-C connector, but underneath it’s all USB 2.0—not even USB 3.0. However, these technologies are closely related. When buying devices, you’ll just need to keep your eye on the details and make sure you’re buying devices (and cables) that support USB 3.1.

Backwards Compatibility

The physical USB-C connector isn’t backwards compatible, but the underlying USB standard is. You can’t plug older USB devices into a modern, tiny USB-C port, nor can you connect a USB-C connector into an older, larger USB port. But that doesn’t mean you have to discard all your old peripherals. USB 3.1 is still backwards-compatible with older versions of USB, so you just need a physical adapter with a USB-C connector on one end and a larger, older-style USB port on the other end. You can then plug your older devices directly into a USB Type-C port.
Realistically, many computers will have both USB Type-C ports and larger USB Type-A ports for the immediate future—like Google’s Chromebook Pixel. You’ll be able to slowly transition from your old devices, getting new peripherals with USB Type-C connectors. Even if you get a computer with only USB Type-C ports, like Apple’s new MacBook, adapters and hubs will fill the gap.

USB Type-C is a worthy upgrade. It’s making waves on the newer MacBooks and some mobile devices, but it’s not an Apple- or mobile-only technology. As time goes on, USB-C will appear in more and more devices of all types. USB-C may even replace the Lightning connector on Apple’s iPhones and iPads one day. Lightning doesn’t have many advantages over USB Type-C besides being a proprietary standard Apple can charge licensing fees for. Imagine a day when your Android-using friends need a charge and you don’t have to give the sorrowful “Sorry, I’ve just got an iPhone charger” line!

Now the smallest ultrabook, the GPD P2 Max, has a full-function USB-C port.

That means you may only need a hub to do everything you want: charging, data transfer, RJ45 Ethernet, 4K video, anything you can imagine.
The smallest ultrabook, the GPD P2 Max, is now crowdfunding on INDIEGOGO: 16GB of RAM and a 512GB SSD for just $700. Go get your own ultrabook!

Saturday, July 27, 2019

ARM vs X86 – Key differences explained


Android supports 3 different processor architectures: ARM, Intel and MIPS. The most popular and ubiquitous of these three is, without a doubt, ARM. Intel is well known primarily because of its popularity in the desktop and server markets, however on mobile it has had less of an impact. MIPS has a long heritage, and lots of success, for both 32- and 64-bit solutions in a variety of embedded spaces, however it is currently the least popular of the three CPU designs for Android.
In short, ARM is the current winner and Intel is its big brand rival. So what is the difference between an ARM processor and an Intel processor? Why is ARM the more popular choice? And does it matter what CPU is in your smartphone or tablet?
The CPU
The Central Processing Unit (CPU) is the “brains” of your device. Its job is to execute a sequence of instructions to control the hardware on your device (the display, the touch screen, the cellular modem etc.) to turn it from a lump of plastic and metal into a vibrant smartphone or tablet. Mobile devices are complex things and these CPUs need to execute millions of instructions to make them behave as we expect. The speed and power efficiency of these CPUs is critical. The speed affects the user experience, while the efficiency affects the battery life. The perfect mobile device is one that has high performance and low power usage.
This is why the choice of CPU is important. A power-hungry hog of a CPU will drain your battery fast, while an elegant and efficient CPU will give you both performance and battery life. At the highest level, the first difference between an ARM CPU and an Intel CPU is that the former is RISC (Reduced Instruction Set Computing) and the latter is CISC (Complex Instruction Set Computing). In simplified (and I emphasize “simplified”) layman’s terms, RISC instruction sets are smaller and more atomic, while CISC instruction sets are larger and more complex. By atomic, I mean that each instruction roughly translates to a single operation that the CPU can perform, e.g. add the contents of two registers together. CISC instructions express a single idea, but the CPU will need to execute 3 or 4 simpler operations to perform it. For example, a CISC CPU can be told to add together two numbers stored in main memory. To do this, the CPU needs to fetch the number from address-1 (one operation), fetch the number from address-2 (second operation), add the two numbers (third operation) and so on.
Intel Pentium 4 640 “Prescott” CPU, bottom view
All modern CPUs use a concept known as microcode, an internal instruction set of the CPU that describes atomic operations that the CPU can perform. It is these smaller (micro) operations that the CPU actually executes. On RISC processors, the instruction set operations and the microcode operations are very close. On CISC, the complex instructions need to be translated into smaller microcode ops (as described above with the CISC add example). This means that the instruction decoder (the bit that works out what the CPU actually needs to do) is much simpler on a RISC processor, and simpler means less power and greater efficiency.
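As a loose illustration of that decomposition (pseudo micro-ops written in Python, not real x86 or ARM microcode; the addresses, register names and operation names are invented for the example), a single CISC-style "add two numbers in memory" instruction breaks down into a handful of atomic, RISC-like steps:

```python
# Hypothetical breakdown of one CISC-style "ADD [0x100], [0x104]" instruction
# into atomic micro-operations, roughly what a RISC instruction set exposes directly.

memory = {0x100: 7, 0x104: 5}
registers = {}

micro_ops = [
    ("LOAD",  "r1", 0x100),        # fetch the first operand from memory
    ("LOAD",  "r2", 0x104),        # fetch the second operand from memory
    ("ADD",   "r1", "r1", "r2"),   # add the two registers
    ("STORE", 0x100, "r1"),        # write the result back to memory
]

for op in micro_ops:
    if op[0] == "LOAD":
        registers[op[1]] = memory[op[2]]
    elif op[0] == "ADD":
        registers[op[1]] = registers[op[2]] + registers[op[3]]
    elif op[0] == "STORE":
        memory[op[1]] = registers[op[2]]

print(memory[0x100])   # 12 -- one complex instruction, four simple micro-ops
```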

Fabs

The next major difference between an ARM processor and an Intel processor is that ARM has only ever designed power efficient processors. Its raison d’être is to design low-power usage processors. That is its expertise. However Intel’s expertise is to design super high performance desktop and server processors. And it has done a good job. Intel is the industry leader in desktops and servers. Every PC, laptop and server I have owned (with the exception of one) in the last 20 years had an Intel processor in it. However to get into mobile, Intel is using the same CISC instruction set architecture (ISA) that it uses on the desktop, but it is trying to shoehorn it into smaller processors, suitable for mobile devices.
The average Intel i7 processor produces around 45W of heat. The average ARM-based smartphone SoC (including the GPU) has a maximum instantaneous peak power of around 3W, some 15 times less than an Intel i7. Now Intel is a big company and it has lots of smart people working there. Its latest Atom processors have similar thermal designs to ARM-based processors, however to do that it has had to use the latest 22nm fabrication process. In general, the lower the fabrication nanometer number, the better the energy efficiency. ARM processors have similar thermal properties at higher nanometer fabrication processes. For example, the Qualcomm Snapdragon 805 uses a 28nm process.
A 28nm wafer

64-bits

When it comes to 64-bit computing, there are also some significant differences between ARM and Intel. Did you know that Intel didn’t even invent the 64-bit version of its x86 instruction set? Known as x86-64 (or sometimes just x64), the instruction set was actually designed by AMD. The story goes like this: Intel wanted to move into 64-bit computing, but it knew that taking its current 32-bit x86 architecture and making a 64-bit version would be inefficient. So it started a new 64-bit processor project called IA64. This eventually produced the Itanium range of processors. In the meantime AMD knew it wouldn’t be able to produce IA64-compatible processors, so it went ahead and extended the x86 design to include 64-bit addressing and 64-bit registers. The resulting architecture, known as AMD64, became the de-facto 64-bit standard for x86 processors.
The IA64 project was never a big success and today is effectively dead. Intel eventually adopted AMD64. Intel’s current mobile offerings are 64-bit processors using the 64-bit instruction set designed by AMD (with a few minor differences).
As for ARM, the story is quite different. Seeing the need for 64-bit computing on mobile, ARM announced its ARMv8 64-bit architecture in 2011. It was the culmination of several years of work on the next-generation ARM ISA. To create a clean 64-bit implementation, but one based on the existing principles and instruction set, the ARMv8 architecture uses two execution states, AArch32 and AArch64.
Cortex A53 and A57 Performance chart
As the names imply, one is for running 32-bit code and one for 64-bit. The beauty of the ARM design is that the processor can seamlessly swap from one mode to the other during its normal execution. This means that the decoder for the 64-bit instructions is a new design that doesn’t need to maintain compatibility with the 32-bit era, yet the processor as a whole remains backwards compatible.

Heterogeneous Computing

ARM’s big.LITTLE architecture is an innovation that Intel is nowhere near replicating. In big.LITTLE the cores in the CPU don’t need to be of the same type. Traditionally a dual-core or quad-core processor had 2 or 4 cores of the same type. So a dual-core Atom processor has two identical x86-64 cores, both offering the same performance and using the same amount of power. But with big.LITTLE ARM has introduced heterogeneous computing for mobile devices. This means that the cores can be different in terms of performance and power. When the mobile device is not busy, a low-energy core can be used, but when you start a complex game, the high performance cores are used.
ARMv8-A Processors - a single scalable architecture

But here is the magic. When talking about CPU designs there are a bunch of technical design decisions that alter the performance and the energy usage of the processor. When an instruction is decoded and prepared for execution, the processor (both Intel and ARM) uses a pipeline. That means the work of handling each instruction is split into stages that overlap. So the part that fetches the next instruction from memory is stage 1, then the instruction is examined and decoded in stage 2, then the instruction is actually executed in stage 3, and so on. The beauty of pipelines is that while the first instruction is in stage 2, the next instruction is already in stage 1. When the first instruction is in the execution step (stage 3), the second instruction is now in stage 2 and the third instruction is in stage 1, and so on.
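A toy calculation shows why this matters: with an N-stage pipeline, one instruction can finish every cycle once the pipeline is full, instead of one every N cycles. A sketch under idealized assumptions (no stalls, no branches, every stage takes one cycle):

```python
def cycles_without_pipeline(num_instructions, stages):
    # each instruction must pass through every stage before the next one starts
    return num_instructions * stages

def cycles_with_pipeline(num_instructions, stages):
    # fill the pipeline once, then retire one instruction per cycle
    return stages + (num_instructions - 1)

n, stages = 1000, 5
print(cycles_without_pipeline(n, stages))   # 5000 cycles
print(cycles_with_pipeline(n, stages))      # 1004 cycles -- nearly 5x the throughput
```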
To make things even faster, these pipelines can be built so that instructions can actually be executed in a different order than in the program. There is some clever logic to work out if the next instruction relies on the result of the instruction ahead of it. Both Intel and ARM have out-of-order execution logic. But as you can imagine, that is some really complex technology, and complex means power hungry. On Intel processors the designers choose whether or not to implement out-of-order execution. But with heterogeneous computing that isn’t a problem. The ARM Cortex-A53 uses in-order execution, meaning it uses less power. The Cortex-A57 uses out-of-order execution, meaning it is faster but uses more power. In a big.LITTLE processor there can be Cortex-A53 and Cortex-A57 cores, and the cores are used according to the demands being made. You don’t need super-fast out-of-order execution to sync your emails in the background, but you do when playing complex games. So the right core is used at the right time.
think big.LITTLE
This principle of using more complex logic in the processor for better performance, and less complex logic for higher efficiency, doesn’t only apply to the instruction pipeline. It equally applies to the floating point unit, to the SIMD logic (i.e. NEON on ARM and SSE/MMX on Intel), and to the way the L1 and L2 caches work. Intel offers one solution per Atom SoC; ARM, through its silicon partners, offers multiple configurations, many of which can be implemented simultaneously in the same silicon.

Compatibility

ARM is the current leader in terms of mobile processors. ARM’s partners have shipped 50 billion chips based on its designs, all for mobile and embedded markets. For Android, ARM is the de-facto standard, and this leads to a problem for Intel and MIPS. Although Android uses Java as its principal programming language, it also allows programmers to take their existing code (in C or C++, for example) and create apps. These “native” apps are generally compiled for ARM processors and not always for Intel or MIPS. To get around this, Intel and MIPS need to use special translation software which converts the ARM instructions into code for their processors. This of course impacts performance. At the moment MIPS and Intel can claim about 90% compatibility with all the apps available in the Play Store. That figure is probably closer to 100% when dealing with the top 150 apps. On the one hand that is good coverage, but on the other hand it shows ARM’s dominance, in that the other processor designers need to offer a compatibility layer.

Wrap up

Building a CPU is a complex business. ARM, Intel and MIPS are all working hard to bring the best technology available to mobile devices, but ARM is clearly the leader. With its focus on power-efficient processors, its clean 64-bit implementation, its heterogeneous computing, and its role as the de-facto standard for mobile computing, ARM looks set to remain on top.

New Device 

Now even the laptop is entering the portable age: devices keep getting smaller, but at the same time they have to keep their performance. GPD has made it happen, combining a regular x86 CPU with a mobile device while keeping good performance; it can even run GTA 5 at 30 FPS.

An m3-8100Y in an 8.9-inch unit, plus 16GB of RAM and a 512GB SSD, makes sense: the GPD P2 Max is now crowdfunding on INDIEGOGO for $700.
Just go for it, 15 days left.

Saturday, April 9, 2016

GPD WIN - Confirmed and Funded on INDIEGOGO

GPD WIN Intel Z8550 Win 10 OS Game Console

https://www.indiegogo.com/projects/gpd-win-intel-z8550-win-10-os-game-console/x/13461816#/