Pros:
- 64GB for desktop (finally!)
- 26 PCIe lanes (e.g. multiple fast M.2 PCIe NVMe drives, making real-time 5k/4k RAW video previewing possible)
OK:
- very small performance increase compared to Haswell, two generations back (Broadwell?)
- DDR4 performance matches XMP DDR3 performance; to see any difference you need a quad-channel configuration
- SGX, the jury is out on this one (double-edged sword: it could improve security, but could also turn into a complete malware mess due to enclave isolation and the inability to detect running encrypted botnets once they gain ring 0)
Cons:
- power consumption is up (!?)
- no AVX-512 for desktop
SGX is terrifying. Botnets are not the main problem. Windows (N+1) using SGX to make it hard/impossible to remove the spyware (and other junk)[1] is the problem. As an example, take a "trusted" environment[2] and add SGX as a way of keeping end users out of the TEE.
While it is still early, SGX may be a key battlefield in the War On General Purpose Computing.
The problem that you describe is not an SGX problem; it is an OS trust problem. The current trust model for general purpose computing is such that a processor is not in a position to prevent a malicious OS from controlling which applications run, whether that processor supports SGX or not.

The malicious OS may of course use SGX if it wants to ensure that unprivileged code has not been tampered with, but do not equate tamper-proofing with keeping end users out. SGX does not strengthen a malicious OS at the expense of user software. On the contrary, it provides computing services directly to an application, removing supervisor code from the TCB of the application.

That having been said, I agree that any move to keep users out of their own systems is troubling and that users should be cautious and not take security and trust claims at face value.
Very terrifying indeed. IMHO these days there is far too much freedom being traded away for security, and users are gradually conditioned toward it. The thinking is almost like "How about we put everyone in prison, because some percentage of them will become criminals anyway?"
Back then it was relatively easy to patch your OS --- even if it was proprietary and closed-source --- to make it behave how you wanted to, if you knew what to change (see all the Windows customising forums for an example, and the whole cracking community.) You didn't have to give up proprietary software completely and move to something like Linux. Now it's become much harder, and I feel that the middle ground between fully open-source and fully closed has mostly disappeared as the two communities are gradually distancing from each other.
The ability to use a General Purpose Computer and an IP address (non-NAT!) gives anybody the ability to publish without any gateway or authority being able to control what can be published. This is real power, in the long-term.
It is logical that the people who traditionally hold the power of what information can be published to large audiences - that is, the people that used to control the scarce resources that limited publication - would really like to see all these troubling technologies stuffed back into Pandora's Box and locked away behind the traditional gatekeepers where they won't cause trouble.
It's called the Digital Imprimatur[1], and it has been implemented while everybody else was busy being distracted by technical minutiae and dreams of stock options. Locking down "true" root access with SGX (or other hardware tricks like Secure Boot) is one of the last steps. If we fail to fight this trend right now... well... what good is a compiler when you can't trust[2] the environment it is running in?
It's "terrifying" but it's also something that's often desired. For example, SGX could enable things like a trusted Bitcoin mixer. Or a trusted telephony provider. Remote code trusting is pretty cool for such scenarios where the other option is "well, sure, you gotta trust the site, but...".
The downside is that it'll be too juicy for DRM and games to pass up, and that's shitty and most likely outweighs the benefit. Perhaps it would have been better if Intel had limited SGX to server workloads and given us AVX-512 instead.
(I'm guessing on SGX here. I am unaware of any detailed info on how exactly it'll work or how the remote attestation works.)
TDP is up. Power consumption is down. TDP is only a maximum value, and due to Skylake's increased power density, it doesn't correlate 1:1 with power consumption.
I don't know much about this but, regarding DDR4, the article says on page 7: "upgrading to DDR4 doesn’t degrade performance from your high end DRAM kit, and you get the added benefit of future upgrades, faster speeds, lower power consumption due to the lower voltage and higher density modules."
My impression was that M.2 is going out of style, at least for desktop - the primary form factor for the Intel 750 etc is an actual PCIe card. The alternative connector if you want a 2.5" drive form factor is U.2 (SFF-8639).
The Intel 750 is officially speaking a consumer product, but it's pretty out there. M.2 is far more popular and is doing a pretty good job of supplanting mSATA and effectively prevented SATA Express from being adopted. U.2 is brand new and is only going to show up on the most expensive consumer products, but M.2 can be found on a lot of motherboards under $100.
You're right, it looks like most of the Skylake launch boards in Anandtech's review are M.2. Somewhat disappointing. I still suspect M.2 is obsolete in the long run, but seemingly not for this generation of Skylake.
There are at least a couple boards I see with a dedicated PCIE 3.0 x4 slot and U.2 support, but they're the highest end and I'm sure not cheap.
You can connect a 2.5" U.2 drive (Intel 750) to M.2, but it's obviously going to reduce performance to M.2 speeds.
M.2 supports the same performance as U.2. Both connectors allow for up to 4 lanes of PCIe. Some lower-cost implementations of M.2 only provide 2 lanes, and it also can provide SATA signals instead of PCIe, but with Intel's desktop chipsets finally getting a bandwidth upgrade everybody is going to be implementing M.2 with four lanes going forward.
Ah, I guess I'm used to the Haswell DMI bus speed still, which severely limited M.2 speed even if it was 3.0 and 4 lanes. I guess that's probably not the case with Skylake.
64GB! Sixty-four! For a desktop/laptop architecture! At 64GB (GiB?) of main memory we're getting to the point where you could copy the entire working set of the OS from disk to memory on boot and then simply save to disk periodically. So long as you have a battery/UPS that'd work. Is that crazy?
I'm currently on 3rd gen, the i7-3537U. My dream machine would be a 13/14" laptop w/ 1080p (though 4k-ish wouldn't hurt), Skylake i7, USB-C, HDMI out, 16GB RAM, and a decent 256GB SSD. Have I forgotten anything?
I would say it's very sad that we get only 64GB. We should be allowed to install at least 128GB or 256GB on consumer-grade mainboards. Similarly, for a developer workstation one has to choose a Xeon to get more cores; a smaller iGPU and twice as many CPU cores in the non-Xeon line would be great, because enthusiasts don't use the iGPU anyway, and developers who genuinely need a fast GPU will find any iGPU too slow, so most chips should trade bigger iGPUs for more cores. It's true that an office clerk's workflow doesn't benefit from more cores due to legacy software, but software developer workflows benefit a lot when building code, running virtual machines, or running (modern) concurrent applications. It's very sad indeed that AMD is the only x86 vendor that puts more cores on consumer chips. Let's hope Zen forces a change.
We are not consumers. Intel would be foolish to optimize their consumer chips for us. If you want a workstation class machine you buy a Xeon. Adding memory channels is not free, and with 4 channels you get 64GB given current DDR4 capacity.
Don't people use GPUs all the time now because of Netflix and movie streaming? My old laptop's GPU would always kick in when I played something in 1080p.
Cannot upvote enough. At this density, and considering data will persist in main memory for days (if not longer), random errors become a major issue. ECC should have become standard in system memory five years ago.
Wait, is that true? There is no error correction in system memory? That sounds like a huge waste. Usually coding not only increases reliability but also lets you decrease power consumption a lot (so you're not "fighting the noise" with power alone).
Yes, exactly true. Some libraries (like InnoDB) actually checksum all contents in memory to detect corruption themselves. Most applications and libraries trust system memory too much and can read corrupted data at any given time.
Another fun way to get corrupted memory is to have some data swapped to disk, but have disk corruption, then have the corrupt disk data restored to memory. Bam. Instant invalid data in memory, but not caused by memory.
Many hashes are fast these days and should be used as checksums in more places since ECC isn't as common as it should be (and ECC doesn't detect all errors anyway).
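To make that concrete, here is a minimal sketch in C of the self-checking pattern described above (illustrative only; this is the general idea, not InnoDB's actual page-checksum code): each record carries a checksum that is re-verified on read, so a flipped bit is detected instead of silently consumed.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Bitwise CRC-32 (reflected polynomial 0xEDB88320). Slow but
       dependency-free; real code would use a table-driven or hardware
       (SSE4.2 CRC32c) implementation. */
    static uint32_t crc32(const void *buf, size_t len) {
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;
        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1));
        }
        return ~crc;
    }

    /* A record that carries its own checksum. */
    struct record {
        char     payload[64];
        uint32_t checksum;
    };

    static void record_seal(struct record *r) {
        r->checksum = crc32(r->payload, sizeof r->payload);
    }

    static int record_ok(const struct record *r) {
        return crc32(r->payload, sizeof r->payload) == r->checksum;
    }

    int main(void) {
        struct record r = {0};
        strcpy(r.payload, "hello");
        record_seal(&r);

        r.payload[3] ^= 0x04;  /* simulate a bit flip in RAM */
        printf("record valid: %s\n", record_ok(&r) ? "yes" : "no");
        return 0;
    }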
For a while some of the Xeon E3s were the thing to buy, as they were sold at lower prices than the equivalent i7 CPUs (Xeons with 4 cores+HT and lots of cache were sold at the same price as the quad or dual+HT i5s), but this was "fixed" and now the cheap Xeons are crippled (no HT anymore).
Don't know about the quality of the support†, or what "decent" onboard sound is, but Supermicro sells a bunch of boards that support 1 or 2 Xeons (and generally also support consumer chips), ECC and have onboard sound. E.g. for the low, low price of $289 this board http://www.newegg.com/Product/Product.aspx?Item=N82E16813182... will support two processors, up to a half terabyte of memory, and has a Realtek ALC888 for 7.1 channels of sound.
†I'm typing this on a single-processor X9SAE board, with HDMI output (sound included) going to my receiver and then the monitor.
ECC support + decent onboard sound seems like a niche feature combination (at least in the Intel world where ECC support means buying a Xeon and a compatible server/workstation motherboard). Some AMD consumer boards do support ECC if you want to go down that route (e.g. http://www.asus.com/us/Motherboards/M5A97_PLUS/specification... ).
Not sure of your particular use case, but wouldn't it be just as easy to pick up a cheap (or decent) sound card that would meet or beat anything the onboard could do? I personally tend to stick with onboard because it does what I need (sound for music and movies) and I use external sound interfaces when I want to mess with anything requiring more sound performance/options (like recording or making my pitiful attempts at producing music).
ECC memory has a small performance overhead, and is slightly more expensive (e.g. 12%).
It is segmentation for sure -- the chipsets and processors can easily support it -- and many of the people throughout this thread really should just be looking at Xeons. If the 64GB cap and non-ECC memory are really such a problem, which they are for only a vanishingly small percentage of users, go with a workstation chip at a small premium and have it all.
It's notable that effectively no mobile devices use ECC memory. Tablets don't. The vast majority of desktops don't. If you believed the rhetoric, these should all be crashing and destroying lives regularly. It turns out that ECC comes into play very, very infrequently. I would never buy a server that didn't have ECC, but on my desktop it just really doesn't matter that much.
Where does the performance overhead of ECC come from? I know registered memory has a small but measurable latency penalty, and ECC modules are often also registered, but a simple parity check in the memory controller should be a pretty fast circuit.
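For a concrete sense of how cheap the check itself is, here is a toy C version of plain parity (detection only; real ECC DIMMs use a SECDED Hamming code over 64 data bits plus 8 check bits, but it's the same kind of XOR-tree logic, only a few gate delays deep in hardware):

    #include <stdint.h>
    #include <stdio.h>

    /* Even parity of a 64-bit word: XOR-fold down to one bit.
       In hardware this is a pure combinational XOR tree. */
    static int parity64(uint64_t x) {
        x ^= x >> 32;
        x ^= x >> 16;
        x ^= x >> 8;
        x ^= x >> 4;
        x ^= x >> 2;
        x ^= x >> 1;
        return (int)(x & 1);
    }

    int main(void) {
        uint64_t word = 0xDEADBEEFCAFEF00DULL;
        int p = parity64(word);                   /* stored with the data */

        uint64_t corrupted = word ^ (1ULL << 17); /* simulate a bit flip */
        if (parity64(corrupted) != p)
            printf("single-bit error detected\n");
        return 0;
    }

So, as you suspect, the logic itself is fast; my understanding is that most of the measured penalty comes from the register/buffer on RDIMMs rather than the check itself.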
> we're getting to the point where you could copy the entire working set of the OS to memory
I've run diskless, boot-from-network systems with 2 to 4 GB RAM with the entire OS+data+servers in memory. If you know your exact sizes, you don't need giant RAM.
As far as desktop or interactive systems go, in 2009 I made a desktop with 24 GB RAM which essentially never read from the disk after everything got cached appropriately.
> save to disk periodically.
That's what your kernel does. There's a long history of flushing dirty pages back to disk.
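On Linux you can even tune when that flushing happens; a sketch of the relevant sysctls (the values shown are common defaults and vary by kernel):

    vm.dirty_background_ratio = 10      # background writeback starts at 10% of RAM dirty
    vm.dirty_ratio = 20                 # writers get throttled and forced to flush at 20%
    vm.dirty_writeback_centisecs = 500  # flusher threads wake every 5 seconds
    vm.dirty_expire_centisecs = 3000    # dirty pages older than 30 seconds get written out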
> My dream machine would be
It looks like you want a MacBook Pro from 3 years ago?
I have to admit, at the moment, the thing that most frequently fills my laptop's memory is browser tabs. There has got to be a better way -- and I know there are solutions, but their UX has failed me so far. Hardware advances are nice, but software/UX still has a long way to go.
> 64GB! Sixty-four! For a desktop/laptop architecture! At 64GB (GiB?) of main memory we're getting to the point where you could copy the entire working set of the OS from disk to memory on boot and then simply save to disk periodically. So long as you have a battery/UPS that'd work. Is that crazy?
That would be very unreliable and would cause huge load times and huge power-down times. Also, a modern OS uses free memory as a disk cache, so you are already getting all the speed benefits.
So basically you might just do find / -type f -exec cat {} > /dev/null \; to fill your disk cache if you have enough memory.
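Minor aside: using "-exec cat {} +" instead of "\;" batches many files into each cat invocation rather than forking one process per file, which is a lot faster on a full filesystem walk; a tool like vmtouch can also do the same thing more selectively.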
I use the Slitaz distribution on my old Compaq computer from 1999, and wow! Even with only 384 MB of RAM, the system is blazing fast, all thanks to copy-to-ram!
A 64GB maximum is seriously disappointing - I upgraded my 3 year old pc to its 32GB maximum a few months ago and right now it's using all that plus 27GB of swap. I could easily and usefully use much more.
Until we have persistent memory, anyway. 3D XPoint is a pragmatic step in that direction. It would be weird to reorganize a system around lots of RAM just as persistent DRAM/flash hybrids arrive.
Dell's XPS[1] range should eventually get a refresh also, but Dell seems to have a habit of launching laptops juuust before Intel does a refresh which is myopic or careless or I don't know what.
Looking at those benchmarks[2], the Skylake i5 is no slouch, but I just have an aversion to the i5 for no rational reason whatsoever. It seems to be clocked lower and doesn't have hyper-threading. If you're going to get the top-of-the-line CPU and chipset you may as well go for the i7 rather than the i5, is my thinking. I guess the i5 exists to create mid-range gear -- pair an i7 with a dedicated GPU, i5 uses integrated; i7 gets a retina panel w/ touch, i5 gets 1080p and no touch; i7 gets more RAM, i5 gets less; and so on.
Dell probably does that so that they can quickly benefit from the following price drop. Most of their market will not even know what the latest processor is, anyway.
I can't see myself retiring my i5-3570k since this isn't a sufficient bump in performance for me (even with quad-channel DDR4). So, I'll have to wait and see what Intel does with its iGPU technology. If the iGPU technology gets as good as a GTX 970 (or R9 290X) then I might buy one of their next generation CPUs when they come to market. :/
From what I've read, the Skylake GPU capability is a step backwards from the Broadwell ones, though it may be that's because the audience for the two chips released so far is considered to be gaming enthusiasts, who will probably have one or more discrete cards anyway.
Yeah, it just seems like only AMD understands that you have to marry the GPU with the CPU aspects of the processors now. And I don't know if AMD can keep it up at this rate considering some of the projections hint at a potential default around 2020.
CPUs seemed to hit their sweet spot around 2010 and have not made a huge amount of practical progress since. Up until about 2 weeks ago my gaming PC was running an i5 750, a processor released back in 2009, and it was still able to run all but the most demanding games on max settings, GTA5 being the exception. Even my video cards (GTX 660 Ti in SLI) are on the older side, having been released back in 2012/2013?
Recently my PSU blew and took the motherboard with it, so I purchased an i5 4460/Z97 combo for about $300 from Amazon, which clears up the bottleneck in GTA5. This CPU is already a year old but I'm glad to see it's within 1-2FPS of the new chips in most games. Most likely I won't even need to look at upgrading for another 5 years.
I think to some extent Intel is backing themselves into a corner with this release: their last generation was so good (i5 4690k or 4460) that Skylake seems rather lackluster.
Was looking at this earlier today as my machine is about 4 years old now.
From what I can tell, moving from my i5-2400 (Sandy Bridge) to the i7 6700K (Skylake) would apparently buy me about a 70% performance boost. But then moving to a 4790K (Haswell) gets me a 69% boost. And I could get a 40% boost by buying a 3770K (Ivy Bridge), and then I wouldn't even need a new motherboard, RAM etc....
Skylake supports DDR4 and DDR3L but the slots are incompatible; motherboard manufacturers need to choose one or the other. All the Z170 boards I've seen do not have DDR3L slots. Even if they did, DDR3L is lower voltage than DDR3; standard DDR3 will never work in a Skylake board.
It's not clock-for-clock or anything like that, that was comparing the absolute top of the range desktop Haswell (4790K) to the one Skylake i7 released so far. Or rather comparing both against my i5 2400.
I'm not exactly sure what to do now. I probably don't actually need a processor upgrade anyway, the graphics card is the more important part.
From what I've read it's actually more like an 8-10% increase (bigger in some cases, smaller in others). Still not much, but good enough.
More important for me is the other stuff we get with new architectures (support for Thunderbolt 2, better integrated graphics for low-power laptop use, etc.).
Yeah, there do seem to be loads of platform-related improvements since Sandy Bridge.
The thing that got me thinking about an upgrade recently was deepdream, which probably doesn't care about any of those and just needs more raw power, on the CPU and the GPU.
Not that I'll probably care about that when my current fascination with it wears off in a couple of weeks...
I'm still running a Q6600 + 8GB RAM I bought in 2007.
I don't really do gaming, so my perf demands aren't as high as yours. But if you'd told me, in 2007, when I bought that CPU, that it'd still be usable - much less sufficient - 8 years later, I'd have thought you were crazy.
Yeah. I have an i5 2500k (Sandy Bridge), not even overclocked, and about a month ago I upgraded my GPU from a GTX 570 to a 970. I can run all modern games at max settings (1920x1200). Occasionally I've needed to turn shadow quality from "extreme" to "very good" but that's about it.
As I'm slaving over a hot laptop in the middle of summer, I have to ask. Will this generation allow for cooler machines?
To be honest I write text files and email for a living. Can anyone recommend the coolest yet snappy laptops? With plenty of memory (16GB minimum) and a "retina" display. Perhaps next year's fanless MacBook is a candidate? Though I don't mind a fan.
I have a "Intel(R) Core(TM) i7 CPU Q 720 @ 1.60GHz" with "Madison [Mobility Radeon HD 5730 / 6570M]", which is a few years old. Looking for something new that has half the heat output, and is hopefully faster.
So I'm planning on building a new gaming rig because my current PC is still running Sandy Bridge. Would you guys recommend Skylake? I found this MoBo + CPU + Memory package on Newegg for $500:
From my understanding, the i5 CPUs are great for gaming and i7 is great for running multiple applications at the same time. I appreciate any help, thanks!
The biggest thing I've seen that Skylake provides for high-performance gaming is 26 PCIE lanes.
This means you can run a PCIE 3.0 4x+ lane NVMe SSD like the Intel 750 while also running a 3.0 16x GPU (or two GPUs in SLI/CrossFireX). Given an appropriate board, anyway - I haven't looked at Skylake boards.
The performance increase of PCIE NVMe SSDs over SATA SSDs is amazing. SSDs are already great of course, but the Intel 750 can decrease load times by another order of magnitude.
Looking at the linked motherboard, its PCIE 3.0 slots are shared bandwidth, which isn't taking advantage of Skylake at all.
Edit: Actually, looking at the launch boards in Anandtech's review, most of them are routing the extra PCIE 3.0 lanes to M.2 ports (potentially a step backwards from U.2 but forwards from SATA). Getting maximum performance out of a NVMe SSD is seemingly still going to be restricted to only certain high-end boards for the time being.
At a glance, it looks like the "MSI Z170A Gaming M9 ACK" board has a separate dedicated 3.0 x4 slot and supports U.2, so there's at least some availability, but it's $400.
Edit2: Apparently Skylake M.2 slots can provide full 3.0 x4 performance, so they're probably fine.
Note that most/all of the current M.2 SSDs are running as AHCI devices, not NVMe. There's an OEM Samsung NVMe unit not available for retail yet. Apparently the performance difference between the AHCI and NVMe version isn't big, anyway.
Edit: here's the key line regarding a modest improvement for small random access:
> Typically the NVMe version offers about 10-20% improvement in average latency over the AHCI version, which is a healthy boost in performance given that the two utilize identical hardware.
I'm about to build a new rig as well and was waiting on Skylake to see if I wanted to jump in. After seeing the reviews and price points, I'm going to pass and go with Haswell. The pros do not justify the price premium to me. Right now the difference for me is around $150, which I'd rather sink into the video card.
Also, I've read that the Skylake CPUs don't come with a cooling solution; you have to buy your own third-party cooler, which adds to the cost. I know many people think the stock cooling sucks, which it does for overclocking. But if you don't overclock, the best reason for a third-party cooling solution is to reduce noise.
Currently I'm planning on an i5 as I've never seen an advantage to having an i7 for gaming. An i5 and a good video card is all you need.
When pricing on a budget, yes, $30 can be a big deal. Let's see, $250 CPU + $30 cooler (realistically, should be higher) versus $200 CPU that suits my needs just fine. Gives me an extra $80 to maybe upgrade elsewhere that might make a bigger difference to my goals.
Why exactly is a $400 SSD suggested to go with Skylake for gaming? Why not a $100 SSD? What does the size of the SSD have to do with Skylake? I'm failing to understand your point.
If you're not interested in spending the money to get top-tier performance, there's absolutely no reason to buy Skylake right now.
The Intel 750 is NVMe, the next generation of storage interface. The 750 can outperform a SATA SSD by 10x in reads, and even versus an outstanding SATA SSD will get you at least a 4x increase.
Load times are actually pretty important for games, and after a GPU I don't think there's anywhere you could spend extra that would give you more quality of life.
You'd already be looking at $250 for a similarly-sized high-performance SATA SSD, so the price difference isn't that large. You aren't getting high performance even as far as SATA for $100.
I can recommend the i5 4460, which is about ~$180. It's a locked 4690k for about $60 less, but honestly it's able to play even the most demanding games on Ultra settings (Witcher 3 / GTA 5) if paired with a good GPU. I would advise picking up a Z97 motherboard to go with it so that if you do want to upgrade the processor in a few years you have a path to the higher end of the Haswell line.
If you buy a locked i5, consider a H97 board instead. Z97 allows for overclocking, which you can't do with a locked CPU. Just match the features you want for a bit less. Most H97 boards will cover the Haswell line just fine, just no or limited overclocking.
Of course, if you find a Z97 that has what you want at a similar or lower price, then by all means go for it.
Z97 is useful because if you decide to upgrade to one of the higher end overclockable CPUs in the future you can. I found the difference in price can be really small or the same. I went with a Gigabyte Black board which is around ~$135 on Amazon and supports SLI. http://www.amazon.com/Gigabyte-GA-Z97X-UD3H-BK-Motherboard-I...
The two currently released Skylake CPUs are both unlocked (K) models. It may be worth waiting for the rest of the roll out to see what happens. It might be that Intel will ship a stock cooling solution for their locked models.
Locked models are reported to come out in Q3. I would imagine pricing will be similar, and I doubt the performance will be much different, so I can't see a reason for me to wait. If there were a chance it would drag prices of other CPUs down then it would be worth considering. But all the costs associated with Skylake at the moment (new RAM type, new motherboard, new CPU, and premium pricing on all of these components) are too much for too little, to me.
The i5 is fine for running multiple applications at once. The i7 adds hyperthreading, which allows each core to act like two virtual cores to run parallelized calculations more quickly, but in some cases it actually comes out slower than with hyperthreading turned off. The only way to know for sure is testing, but games are almost certainly not optimized for this.
Save your money and get an older processor. Skylake gives a tiny performance increase which you won't even notice, to be honest. Unless you're literally going to use it to convert videos or stuff like that, it's useless.
Are those absolute latency increases in nanoseconds, or just bigger numbers in clock cycles? The clock cycle counts will increase as effective clock rates increase, but the real latency in nanoseconds could still be the same or lower.
Realized time might be the same or lower than DDR3, but I was more interested in the cycle counts (RAS, CAS, etc..). Latencies typically do increase as frequency increases, but better modules usually still support lower latencies.
DDR4 uses a lower voltage, which is typically bad for latency, but I'm wondering if there's some other fundamental limitation to the standard.
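To put rough numbers on it (back-of-the-envelope, using common timings): first-word latency is roughly CAS cycles divided by the memory clock. DDR3-1600 CL9 runs its clock at 800 MHz, so 9 / 0.8 GHz ≈ 11.3 ns. DDR4-2133 CL15 clocks at 1066 MHz, so 15 / 1.066 GHz ≈ 14.1 ns. A DDR4-3000 CL15 kit gets 15 / 1.5 GHz = 10 ns. So the cycle counts do climb, but once clocks rise the real latency lands in the same ballpark as DDR3.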