tag:blogger.com,1999:blog-5421791030254053272023-11-16T09:32:18.520-08:00Scott's BlogOngoing posts of some things I'm working on or general observations made.Scott S.http://www.blogger.com/profile/07325635495904196575noreply@blogger.comBlogger13125tag:blogger.com,1999:blog-542179103025405327.post-21127705277595974922016-06-21T10:28:00.002-07:002016-06-21T10:28:36.092-07:00SMB sharing Recently while at home I thought a good use for one of my idle systems would be as storage space for Windows clients, for things like backups, media sharing, or whatever.<br />
<br />
Following http://www.oracle.com/technetwork/articles/servers-storage-admin/solaris-zfssmb-sharing-2390458.html is largely straightforward. The only thing I had trouble with was modifying the path that appears client side, via the syntax of the sharesmb property. I've also been wanting to learn more about the 'vscan' property; if you've got Windows clients housing documents in this centralized location, you may as well run some checks on them.<br />
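For reference, a minimal sketch of the sharesmb name= option, which is what controls the path clients see (dataset and share names here are examples, and exact syntax varies between Solaris releases):

```shell
# Create a dataset suited to SMB (mixed case sensitivity is the usual
# recommendation for Windows clients) and name the share explicitly.
zfs create -o casesensitivity=mixed data/media
zfs set sharesmb=name=media data/media   # clients then see \\server\media
zfs get sharesmb data/media              # confirm the share resource name
```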
<br />
The docs on vscan don't give enough information on how to use it; you can configure it, install the packages, and enable the service, but you must have some scan engine IDs, and I cannot find any reference on how to get those IDs. I'll have to look into that missing part.<br />
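A hedged sketch of what the setup appears to look like: as far as I can tell the engine ID is simply a label you choose yourself when registering an external ICAP scan engine (the ID, host, and port below are all examples, not values from the docs):

```shell
# Enable the vscan service, register an external ICAP scan engine under
# an arbitrary ID ("av1" is an example), then turn scanning on per dataset.
svcadm enable vscan
vscanadm add-engine av1
vscanadm set-engine -p host=192.168.1.50 -p port=1344 av1
zfs set vscan=on data/docs
vscanadm show   # review the configured engines and properties
```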
<br />
Still waiting on my replacement x99 board, at least a few weeks to a month I've been told...once I get this I'd probably do shadow migration or simply send a snapshot over to the rebuilt system.<br />
<br />
I also set up File History on the Windows 10 client side with the SMB volumes, and it was a pain to do. For some reason it kept crashing, and I had to disconnect and re-map the network drive, reset the File History config, etc. before I managed to get it automatically backing up Windows files and specific paths/folders to the drive. While at it I set copies=2 on some of the more important data server side. Perhaps I should also find out how to set up automatic snapshots in addition. The next task could be to configure some headless VirtualBox Windows hosts for people to use when needed.Scott S.http://www.blogger.com/profile/07325635495904196575noreply@blogger.com0tag:blogger.com,1999:blog-542179103025405327.post-21795565021318495472016-05-22T05:13:00.002-07:002016-05-22T05:13:56.144-07:00Unusual disk failuresDrives always seem to be failing or having problems, more often than I gave them credit for. I had 3 drives do this at the same time, just during a simple archive-creation process. I wonder if I got a bad batch; then again, the more drives you have, the higher the chance something will fail.<br />
<br />
<br />
        NAME                         STATE     READ WRITE CKSUM<br />
        data                         DEGRADED     0     0     0<br />
          raidz2-0                   DEGRADED     0     0     0<br />
            c0t50004CF210AD1C22d0    ONLINE       0     0     0<br />
            spare-1                  DEGRADED     0     0   249<br />
              c0t50004CF210BE51F1d0  DEGRADED     0     0    70<br />
              c4t0d0                 ONLINE       0     0     0<br />
            spare-2                  DEGRADED     1     0     2<br />
              c0t50004CF210BE51F3d0  UNAVAIL      0     0     0<br />
              c4t1d0                 ONLINE       0     0     0<br />
            c0t50004CF210BE5214d0    ONLINE       0     0     0<br />
            c5t3d0                   ONLINE       0     0     0<br />
            c4t3d0                   ONLINE       0     0     0<br />
        spares<br />
          c4t1d0                     INUSE<br />
          c4t0d0                     INUSE<br />
<br />
        NAME                       STATE     READ WRITE CKSUM<br />
        rpool                      DEGRADED     0     0     0<br />
          mirror-0                 DEGRADED     0     0     0<br />
            c0t500A0751F0096E9Ed0  DEGRADED     0     0   196<br />
            c0t500A0751F0097DA7d0  ONLINE       0     0     0<br />
<br />
I attempted some more reads and ...<br />
<br />
        NAME                       STATE     READ WRITE CKSUM<br />
        rpool                      DEGRADED     0     0     0<br />
          mirror-0                 DEGRADED     0     0     0<br />
            c0t500A0751F0096E9Ed0  DEGRADED     0     0 1.00K<br />
            c0t500A0751F0097DA7d0  ONLINE       0     0     0<br />
<br />
<br />
So two are degraded due to checksum errors when attempting to read data back, while the other drive just seems not to be powered on at all (the LED on the front is inactive). Why?<br />
<br />
I'll re-architect the data pool, but first I'll test out autoreplace and find out how it works (I assume you simply take the disk out, put in a new one, and it's all done). It will depend on HW support as well, so best to test this. I made a comment in the Oracle Community - https://community.oracle.com/message/13836284#13836284<br />
<br />
zpool get autoreplace data<br />
NAME PROPERTY VALUE SOURCE<br />
data autoreplace on local<br />
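If autoreplace doesn't kick in (no spare available, or the HW doesn't support it), the manual path is roughly the following sketch; the device names here are examples:

```shell
# Ensure autoreplace is on, then manually swap a failed disk for a new one.
zpool set autoreplace=on data
zpool replace data c0t50004CF210BE51F3d0 c4t2d0   # old device, new device
zpool status data                                 # watch the resilver run
zpool clear data                                  # clear old error counters
```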
<br />
In the end the mobo was simply faulty (it went on fire) beside some small chips by the LSI SAS controller, next to the heat-sink.Scott S.http://www.blogger.com/profile/07325635495904196575noreply@blogger.com0tag:blogger.com,1999:blog-542179103025405327.post-37102061628305655542016-05-11T16:01:00.000-07:002016-05-13T11:21:15.113-07:00OpenStack meetingCame back from Fujitsu HQ in London, where I heard some of the problems faced by the community and made a few contacts.<br />
<br />
A couple of presentations were given by various customers/companies, and it looks to be used in various ways. An Ubuntu guy showed this running on top of his laptop with KVM + LXD, also known as "LXC 2.0", with ZFS underneath. I noticed he had 28% fragmentation on his zpool (only 1 vdev), which seems a little odd to me, and I'm wondering why that is. Running many of these LXD "containers", he used an lxc command to take a snapshot (ZFS underneath), then did something like rm -rf /, and was afterwards able to recover from the snapshot. <a href="https://insights.ubuntu.com/2016/03/22/lxd-2-0-your-first-lxd-container/" rel="nofollow" target="_blank">https://insights.ubuntu.com/2016/03/22/lxd-2-0-your-first-lxd-container/ </a><br />
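A hedged reconstruction of that demo (the container name and image alias are examples; I didn't see the exact commands used):

```shell
# Launch a container (LXD backed by a ZFS storage pool), snapshot it,
# break it, then roll back. Each snapshot is a ZFS snapshot under the hood.
lxc launch ubuntu:16.04 demo
lxc snapshot demo before-rm
lxc exec demo -- rm -rf --no-preserve-root /   # wreck the container rootfs
lxc restore demo before-rm                     # recover from the snapshot
zpool list -o name,capacity,fragmentation      # check frag like he had
```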
<br />
So: Ubuntu - KVM - Systemd - LXD - ZFS - OpenStack <br />
<br />
An unusual viewpoint that seems backwards to me from this Ubuntu guy: "create LXD containers, each with a different OpenStack service running per container, to then run OpenStack on top of this?" The reasoning was that if you have more systems running and want to do a distribution upgrade, you can migrate the OpenStack services within those containers to another machine, do the upgrade, and migrate everything back... or likewise if a disk fails, memory fails, etc.<br />
<br />
I get the feeling this is probably being over-complicated, for one. Maybe it can be done more easily, but I haven't really played with how Ubuntu is doing these things.<br />
<br />
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators could be handy for discussing with some of the developers and looking into more use cases; many people around the world are working on this and contributing. I reckon it'll be easier to see what they've done and go from there.<br />
<br />
I heard of problems after mass deployment across 1,000+ Linux hosts with KVM when attempting upgrades across systems. From what was described, it could be people rushing to use this and implementing it without first thinking longer term. What problems could we encounter? How do we manage upgrades? How will this scale?<br />
<br />
It appears it might be possible to use something other than Neutron for SDN (software-defined networking). We were shown a demo, but it didn't tell us much (it was more about a pretty, bubbly GUI). http://www.nuagenetworks.net/ https://www.youtube.com/watch?v=OjXII11hYwc&feature=youtu.be<br />
<br />
Apparently it was a big problem, with no clean, direct upgrade path from nova-network to Neutron; it required an "entire rebuild". nova-network is deprecated - http://docs.openstack.org/openstack-ops/content/nova-network-deprecation.html<br />
<br />
Also, Fujitsu is creating a new type of software based on OpenStack, called "K5", doing "IaaS + PaaS" with 200+ more APIs than base OpenStack, and taking it on as an internal global product to attempt to save millions in the process. (All done on Red Hat & CentOS, nothing else.)<br />
<br />
It looks like everyone is thinking about the same areas in this cloud and container space, and then how to make money out of it across larger scales of various customers. If so much focus is directed at this, are we therefore missing other aspects that are happening around us? I will be following up on this more.Scott S.http://www.blogger.com/profile/07325635495904196575noreply@blogger.com0tag:blogger.com,1999:blog-542179103025405327.post-53962810180913802772016-05-02T21:48:00.000-07:002016-05-02T21:48:14.993-07:00Openstack beginningsLately I've been looking into using OpenStack. From what I have gathered, it looks to be better integrated into Solaris than Linux, although more up-to-date versions are more easily available on Linux.<br />
<br />
Part of the OpenStack service "Glance" requires .uar (Unified Archives) for host deployments, so it is probably the preferred method to use .uar for installing the zone/kernel zone across systems as well, to keep this the same everywhere.<br />
<br />
I'm thinking it'd be good to practise re-installing a few times and reverse engineer what is inside the publicly available .uar from Oracle, which we're using as a test bed. It was generated using 11.3 GA, and further steps haven't been described in much detail. I want to customise the install to be lightweight and contain only what we need, to make deployments faster, so it'll be easier to scale at the same time when and if we get further down that road.<br />
<br />
I also will have to get a bit more used to the front-end interface and think about what kind of "flavours" we could configure for use (type of zone + resources). I was surprised to discover a bug that no-one seems to have found when using the Archives for installation; in the manifest file you have a section like <br /> <br /> <software_data action="install"> <br /> <name>{deployable system name}</name> <br /> </software_data><br />
<br />
I was unsure what this <name> tag is for; I figured maybe the zone name, but it turns out this is the Deployable System name. The bug means this must match the name in the manifest inside the archive, otherwise it will fail to install with no useful output in the install log for why it failed ("list index out of range"). It should work with any name, but it is recommended to match the same as the .uar file, which is simple to check with archiveadm info <name of .uararchive file>. I have also since had Oracle correct minor typos in other documents on archives. <br />
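A sketch of that check (archive and zone names here are examples taken from this post, not canonical values):

```shell
# Inspect an archive to see the deployable system name it carries, and
# create one from an existing zone. The <name> element in the AI manifest's
# <software_data action="install"> section must match what 'info' reports.
archiveadm info -v oscn-uar-kz.uar
archiveadm create -z oscn-uar-kz /var/tmp/oscn-uar-kz.uar
```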
<br />
I need to find out what packages and services are required, how to configure these for the different node types, and how to have this prepared out of the box. I also want to set some FS options, like compression on and atime off, where possible. One simple problem at the moment: trying to install from an archive, I get "ERROR: Archived zone oscn-uar-kz has no AI media", hmm... creation gives "Archive creation failed: Failed to locate AI media, --exclude-media may be used", but I cannot create without -e due to that... I tried another zone and got the same error...Scott S.http://www.blogger.com/profile/07325635495904196575noreply@blogger.com0tag:blogger.com,1999:blog-542179103025405327.post-66400773164689596742016-02-26T10:24:00.001-08:002016-02-26T10:24:25.473-08:00Ed Oates, from Oracle Useful insights from one of the Oracle Co-founders <a href="https://vimeo.com/30929523">https://vimeo.com/30929523</a><br />
<br />
I think this part of the slide has a few good key points, as he describes in the vid. I found the first half the best; the second half is a bit more business-orientated on some specifics (such as patenting).<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgG-0AYLExxpleglvJQ5AlKO21K1lDdCK5hrkX9INQbSX2cUUTfoPp43iE2gPYxXm1k7WO4rrrAnp71Q-38AF09NYraV5DZOsHCRmVv_0om0uWkWX8KSv_zH8mhhZfqSqY0C_H3g0xR-Bo/s1600/ed.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="218" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgG-0AYLExxpleglvJQ5AlKO21K1lDdCK5hrkX9INQbSX2cUUTfoPp43iE2gPYxXm1k7WO4rrrAnp71Q-38AF09NYraV5DZOsHCRmVv_0om0uWkWX8KSv_zH8mhhZfqSqY0C_H3g0xR-Bo/s400/ed.png" width="400" /></a></div>
<br />Scott S.http://www.blogger.com/profile/07325635495904196575noreply@blogger.com0tag:blogger.com,1999:blog-542179103025405327.post-1250177292673997072016-02-03T15:18:00.002-08:002016-02-03T15:18:26.530-08:00Great Server hardwareFeast your eyes on these:<div>
<br /></div>
<div>
This 90-bay 4U monster http://www.supermicro.co.uk/products/chassis/4U/946/SC946ED-R2KJBOD.cfm and it is just over 102 kg!</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
Go back around 10 years and the first one I know of is this http://techreport.com/blog/13849/behold-thumper-sun-sunfire-x4500-storage-server </div>
<div>
<br /></div>
<div>
<div>
48 NVMe? - https://www.supermicro.com/products/system/2U/2028/SSG-2028R-NR48N.cfm</div>
</div>
<div>
<br /></div>
<div>
http://www.theregister.co.uk/2016/02/01/product_blastoff_by_emc/</div>
<div>
<br /></div>
<div>
<br /></div>
Scott S.http://www.blogger.com/profile/07325635495904196575noreply@blogger.com0tag:blogger.com,1999:blog-542179103025405327.post-65932055427483467942016-01-07T15:34:00.000-08:002016-05-10T11:07:25.460-07:00Nvidia, raising standardsRecently I've been looking into the changes from the current card I've got (an Nvidia GTX 680) vs the newer ones; the hardware and architectural changes are interesting, as is where the next phases lead.<br />
<br />
This public doc is a good showcase of some of the main reasons <a href="http://www.nvidia.co.uk/content/PDF/kepler/NVIDIA-Kepler-GK110-Architecture-Whitepaper.pdf" rel="nofollow" target="_blank">Kepler-GK110-Architecture</a><br />
<br />
and compares the goals and changes made going from Fermi to Kepler. I quite like some of this, including the atomic-operation improvements, for one.<br />
<br />
Gives some simplified points then in the next going from Kepler to Maxwell - <a href="http://devblogs.nvidia.com/parallelforall/5-things-you-should-know-about-new-maxwell-gpu-architecture/" rel="nofollow" target="_blank">Top 5 things to know about Maxwell</a><br />
<br />
Overview - <a href="http://devblogs.nvidia.com/parallelforall/maxwell-most-advanced-cuda-gpu-ever-made/" rel="nofollow" target="_blank">Maxwell</a><br />
<br />
This is great because you can now get a card with almost 1,000 cores for under £100, without requiring separate power connectors either. That saves money on buying a PSU to support it, as you had to before, and the electricity bill will be lower. On a large scale this simply means many more people will find upgrading their older systems/desktops a viable option, while HPC and other distributed-computing projects can spend less and save on running costs across a larger infrastructure. Combined with this, Micron (<a href="https://www.micron.com/about/blogs/2015/august/next-gen-graphics-products-get-extreme-speed-from-latest-graphics-memory-solutions" rel="nofollow" target="_blank">Micron 8GB GDDR5</a>) is working to produce greater amounts of memory for graphics cards. Pair that with Nvidia's improvements and it will also benefit those who love high-end games and will probably want to transition to 4K monitors: much more processing across many more pixels, and we also have <a href="http://x265.org/hevc-h265/" rel="nofollow" target="_blank">new video compression/decompression</a> (<a href="https://en.wikipedia.org/wiki/High_Efficiency_Video_Coding" rel="nofollow" target="_blank">HEVC</a>) being introduced as well. Just need a GPU to decode...<br />
<br />
<a href="https://en.wikipedia.org/wiki/GeForce_1000_series" rel="nofollow" target="_blank">GeForce 1000 series </a> - the next gen, named "Pascal"<br />
<br />
Also this <span style="background-color: white; font-family: &quot;trebuchet ms&quot; , &quot;arial&quot; , &quot;helvetica&quot; , sans-serif; font-size: 13px; line-height: 18px;">NVLINK </span>is great! - <a href="http://www.nvidia.com/object/nvlink.htm" rel="nofollow" target="_blank">nvlink</a> <a href="http://blogs.nvidia.com/blog/2014/11/14/what-is-nvlink/" rel="nofollow" target="_blank">what is NVlink?</a> - "5-12x higher bandwidth" etc.<br />
<br />
Excellent news for HPC, I'd like to see progress made with nuclear fusion reactors for defo. - <a href="http://www.nvidia.com/object/exascale-supercomputing.html" rel="nofollow" target="_blank">exascale-supercomputing</a><br />
<div>
<br /></div>
<br />
The other point to mention is that clock speeds are lower in the newer architecture while still giving better performance. As clock frequency rises, the heat generated rises too, so in this case cooling will be less noisy (if using a fan) as the card won't be as hot. A CPU or GPU that runs at a higher clock rate requires substantially more power; i.e. a 4GHz core will use much more than double the power of a 2GHz core (assuming cores of the same architecture). You can always look up "cpu clock speed vs power consumption", along with the charts, docs, etc., if anyone disagrees.<br />
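A rough back-of-the-envelope for why power grows faster than linearly with clock speed, using the standard CMOS dynamic-power relation (the 20% voltage bump below is an illustrative assumption, not a measured figure):

```latex
P_{dyn} \approx C \, V^2 \, f
% Doubling f (2 GHz -> 4 GHz) typically also requires raising V to keep
% the transistors switching reliably. With an illustrative ~20% bump:
\frac{P_{4\,\mathrm{GHz}}}{P_{2\,\mathrm{GHz}}}
  \approx \frac{f_2}{f_1}\left(\frac{V_2}{V_1}\right)^2
  = 2 \times 1.2^2 \approx 2.9
```

So under that assumption the 4GHz core draws nearly 3x the dynamic power of the 2GHz core, which is the "much more than double" behaviour described above.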
<br />
I forgot to mention that it is also helpful to programmers & developers... and it is recommended you read all the way through this doc due to some key considerations - <a href="https://www.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook-e1.pdf" rel="nofollow" target="_blank">Is parallel programming hard?</a><br />
<br />
<br />
<br />Scott S.http://www.blogger.com/profile/07325635495904196575noreply@blogger.com0tag:blogger.com,1999:blog-542179103025405327.post-8452133567625915042015-12-22T14:25:00.001-08:002015-12-22T14:27:17.046-08:00ZFS, like a work of artA subtle thing with ZFS is that you'll notice the drive LEDs flash quite differently to typical storage arrays; once you understand more of what's under the hood, you'll know why that is. Just looking around a DC, you'd be able to observe which servers are running it, for example. You can see this type of effect illustrated here - https://www.youtube.com/watch?v=LS3cfl-7n-4<br />
<br />
ofc that's ZFS on Linux... implemented via FUSE in that build, so less efficient than a FS in kernel space, as elaborated across various posts, some examples: https://lkml.org/lkml/2007/4/16/133 , https://lkml.org/lkml/2007/4/16/83<br />
<br />
Example pool using raidz2 with hot spares, which will autoreplace in the event a drive or two fail. Creating with braces like this is always easier - c4t{0..1}d0. You also have to get the order of the command arguments correct, or you may be second-guessing...<br />
<br />
# zpool create data c0t50004CF210AD1C22d0 c0t50004CF210BE51F1d0 c0t50004CF210BE51F3d0 c0t50004CF210BE5214d0 c4t{0..1}d0 raidz2<br />
Unable to build pool from specified devices: invalid vdev specification: raidz2 requires at least 3 devices<br />
<br />
# zpool create -o atime=off -o compress=lz4 data raidz2 c0t50004CF210AD1C22d0 c0t50004CF210BE51F1d0 c0t50004CF210BE51F3d0 c0t50004CF210BE5214d0 c4t{0..1}d0 <br />
# zpool add data spare c4t3d0 c5t3d0<br />
# zpool status<br />
  pool: data<br />
 state: ONLINE<br />
  scan: none requested<br />
config:<br />
<br />
        NAME                       STATE     READ WRITE CKSUM<br />
        data                       ONLINE       0     0     0<br />
          raidz2-0                 ONLINE       0     0     0<br />
            c0t50004CF210AD1C22d0  ONLINE       0     0     0<br />
            c0t50004CF210BE51F1d0  ONLINE       0     0     0<br />
            c0t50004CF210BE51F3d0  ONLINE       0     0     0<br />
            c0t50004CF210BE5214d0  ONLINE       0     0     0<br />
            c4t0d0                 ONLINE       0     0     0<br />
            c4t1d0                 ONLINE       0     0     0<br />
        spares<br />
          c4t3d0                   AVAIL<br />
          c5t3d0                   AVAIL<br />
<br />
Then, as always, test the assumption, and it works as expected. I've got hot-swap capability, so I pulled a drive out to simulate a failure, then tried writing some data, and it looks to have worked.<br />
<br />
# zpool status -xv<br />
  pool: data<br />
 state: DEGRADED<br />
status: One or more devices are unavailable in response to persistent errors.<br />
        Sufficient replicas exist for the pool to continue functioning in a<br />
        degraded state.<br />
action: Determine if the device needs to be replaced, and clear the errors<br />
        using 'zpool clear' or 'fmadm repaired', or replace the device<br />
        with 'zpool replace'.<br />
  scan: resilvered 136K in 1s with 0 errors on Wed Dec 23 05:53:44 2015<br />
<br />
config:<br />
<br />
        NAME                       STATE     READ WRITE CKSUM<br />
        data                       DEGRADED     0     0     0<br />
          raidz2-0                 DEGRADED     0     0     0<br />
            c0t50004CF210AD1C22d0  ONLINE       0     0     0<br />
            c0t50004CF210BE51F1d0  ONLINE       0     0     0<br />
            spare-2                DEGRADED     0     0     0<br />
              c0t50004CF210BE51F3d0  UNAVAIL    0    24     0<br />
              c4t3d0               ONLINE       0     0     0<br />
            c0t50004CF210BE5214d0  ONLINE       0     0     0<br />
            c4t0d0                 ONLINE       0     0     0<br />
            c4t1d0                 ONLINE       0     0     0<br />
        spares<br />
          c4t3d0                   INUSE<br />
          c5t3d0                   AVAIL<br />
<br />
device details:<br />
<br />
        c0t50004CF210BE51F3d0    UNAVAIL       too many errors<br />
        status: FMA has faulted this device.<br />
        action: Run 'fmadm faulty' for more information. Clear the errors<br />
                using 'fmadm repaired'.<br />
        see: http://support.oracle.com/msg/ZFS-8000-FD for recoveryScott S.http://www.blogger.com/profile/07325635495904196575noreply@blogger.com0tag:blogger.com,1999:blog-542179103025405327.post-85274591901708983782015-11-03T09:43:00.001-08:002015-11-03T09:43:40.095-08:00ZFS born in Zion Interesting vids from the recent OpenZFS Summit 2015; I recommend watching these - <a href="https://www.youtube.com/watch?v=dcV2PaMTAJ4&index=6&list=PLaUVvul17xSedlXipesHxfzDm74lXj0ab">https://www.youtube.com/watch?v=dcV2PaMTAJ4&index=6&list=PLaUVvul17xSedlXipesHxfzDm74lXj0ab</a><br />
<br />
As Jeff Bonwick explains, around the time of ZFS's conception it had links to The Matrix. That's why Oracle documentation has things in there about Neo, Trinity, tank and Morpheus. An amazing film with memorable quotes: <br />
<br />
<i>Morpheus</i>: "You're faster than this. Don't think you are, know you are."<br />
<i>Morpheus</i>: "I'm trying to free your mind, Neo. But I can only show you the door. You're the one that has to walk through it"
<br />
<br />
Let's not forget he was also Cowboy Curtis - <a href="https://www.youtube.com/watch?v=3jsCxNK4vAc">https://www.youtube.com/watch?v=3jsCxNK4vAc</a> <br />
<br />
Lawrence and Samuel aren't the same person....<br />
<a href="https://www.youtube.com/watch?v=8Y1o8910Xs4">https://www.youtube.com/watch?v=8Y1o8910Xs4</a><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIVp6GnQaMaH0TRjYsNjhWVW5oG8vdmRNwBN9lnOrAkejZbOFBstHoE2RKYUY1Ar7hv7k_qm3rnmCQ9GCk1U0zo-cULJIxdboIrj_-DlymYoFw-ed48Oxwyiukifagdd8ZLtiv6G-eTRI/s1600/lol.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="280" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIVp6GnQaMaH0TRjYsNjhWVW5oG8vdmRNwBN9lnOrAkejZbOFBstHoE2RKYUY1Ar7hv7k_qm3rnmCQ9GCk1U0zo-cULJIxdboIrj_-DlymYoFw-ed48Oxwyiukifagdd8ZLtiv6G-eTRI/s320/lol.jpg" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<br />
<br />
Scott S.http://www.blogger.com/profile/07325635495904196575noreply@blogger.com0tag:blogger.com,1999:blog-542179103025405327.post-2839305964835980282015-11-01T15:38:00.000-08:002016-05-10T11:01:55.558-07:00Hardware or Software RAID?<div class="a-row">
<span style="font-family: inherit;">About 4-5 years ago, when I first made a start on learning and using Linux, one of the questions was about RAID. Given you have more than one way to skin a cat, so to speak, which way to skin it?</span></div>
<div class="a-row">
</div>
<div class="a-row">
I was told by a manager (and he was saying this with 100% solidity): "hardware RAID IS the best RAID". I have yet to see this proven. </div>
<div class="a-row">
<br /></div>
<div class="a-row">
<u>Loose Background</u></div>
<div class="a-row">
<br /></div>
<div class="a-row">
Years ago, hardware RAID used to be the better option: CPUs were considerably slower, so software RAID, which is constantly running, would consume a fair amount of CPU resources (thus additional overhead). Combined with the lack of well-designed software RAID (or, for example, firmware RAID on older motherboards), this meant you were better off paying for a dedicated card to handle it, as it also has things like a BBU + cache, so it can reorganise write operations prior to flushing to disk, while keeping writes ready to be flushed even if power is temporarily lost, to maintain a consistent state.</div>
<div class="a-row">
<br /></div>
<div class="a-row">
Questions arose, and can be asked, such as:</div>
<div class="a-row">
</div>
<div class="a-row">
What if the hardware RAID card fails?</div>
<div class="a-row">
If software RAID is improved can we spend less money on HW?</div>
<div class="a-row">
Can rebuilds be done faster through software than hardware RAID?</div>
<div class="a-row">
Perhaps we should integrate LVM/VFS layer together? </div>
<div class="a-row">
Should software RAID be done user space or kernel space?</div>
<div class="a-row">
Is it possible to have software reorganize I/Os like hardware?</div>
<div class="a-row">
What happens to the state of the array if the cache is gone after 72 hours? </div>
<div class="a-row">
etc...</div>
<div class="a-row">
<br /></div>
<div class="a-row">
Linux mdadm is quite a lot better, and you can also use BTRFS or ZFS. I've played around removing drives and rebuilding, etc., using mdadm. I no longer bother now, as I just use ZFS for all my storage needs. </div>
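The sort of mdadm exercise I mean looks roughly like this sketch (device names are examples; practise against loop devices rather than real disks):

```shell
# Build a 3-disk RAID5 array, simulate a failure, and rebuild onto a spare.
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mdadm /dev/md0 --fail /dev/sdb      # mark a member as failed
mdadm /dev/md0 --remove /dev/sdb    # pull it from the array
mdadm /dev/md0 --add /dev/sde       # add a replacement; rebuild starts
cat /proc/mdstat                    # watch the rebuild progress
```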
<div class="a-row">
<br /></div>
<div class="a-row">
In short, software RAID is now at a stage where it is faster than hardware RAID, provides end-to-end checksumming (so no silent data corruption), organizes writes to convert random writes into sequential writes (whilst providing dynamic block allocation), and can be very efficient in terms of its resource usage.</div>
<div class="a-row">
</div>
<div class="a-row">
See Tomasz's comments throughout - <a href="http://markmail.org/message/6t6d7tp4yrneorzr#query:+page:1+mid:duk2cb3a6nzoai7a+state:results">http://markmail.org/message/6t6d7tp4yrneorzr#query:+page:1+mid:duk2cb3a6nzoai7a+state:results</a></div>
<div class="a-row">
</div>
<div class="a-row">
Test that compares software and hardware RAID by Robert - <a href="http://milek.blogspot.co.uk/2006/08/hw-raid-vs-zfs-software-raid-part-ii.html">http://milek.blogspot.co.uk/2006/08/hw-raid-vs-zfs-software-raid-part-ii.html</a></div>
<div class="a-row">
</div>
<div class="a-row">
and, as referenced, also from the "Unix and <span class="matches">Linux</span> System Administration Handbook, fourth edition"</div>
Scott S.http://www.blogger.com/profile/07325635495904196575noreply@blogger.com0tag:blogger.com,1999:blog-542179103025405327.post-83139621134348973042015-10-31T11:25:00.003-07:002015-10-31T11:28:37.931-07:00Microsoft is Evil!This link is funny<br />
<br />
<a href="http://toastytech.com/evil/index.html" rel="nofollow" target="_blank">http://toastytech.com/evil/index.html</a><br />
<br />
and on it within the links is my favorite message<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMKt6U2WikIlH6_fkFdaCIByhNnD4E9_nX-7IQRhGokcGHLTufxZ802gSFFQfJIC4Qlnv4Qccip0LnCn2mBZUMDlvamIqywm-To0rSsa3r9JRdh5LXXfrHVF8v0KyVwtTwmIny2oz_O04/s1600/microsoft.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMKt6U2WikIlH6_fkFdaCIByhNnD4E9_nX-7IQRhGokcGHLTufxZ802gSFFQfJIC4Qlnv4Qccip0LnCn2mBZUMDlvamIqywm-To0rSsa3r9JRdh5LXXfrHVF8v0KyVwtTwmIny2oz_O04/s1600/microsoft.png" /></a></div>
<br />
<br />
from - <a href="http://toastytech.com/evil/errwindows.html" rel="nofollow" target="_blank">http://toastytech.com/evil/errwindows.html</a><br />
<br />
you never know, maybe messages like that could exist! Scott S.http://www.blogger.com/profile/07325635495904196575noreply@blogger.com0tag:blogger.com,1999:blog-542179103025405327.post-21896422280488338742015-10-31T11:17:00.001-07:002015-10-31T11:17:46.756-07:00The best saying about StorageWhen I read this quote I quite liked it.<br />
<br />
"There are two things about hard drives, either they are going to fail, or they have failed."<br />
<br />
Thinking of it in that way means you won't (or shouldn't) rely on some known failure-rate statistics, or think "my RAID has this low chance of failing so I will be fine", etc., as at some point you know they will fail. Enterprise quality or not.<br />
<br />
It is all well and good if you have a RAID array where you can suffer several drives failing at the same time and have spares ready to rebuild, but have you asked: what if another one fails before the rebuild completes? What if they all fail? Ask this because, in my and others' experience, when one thing goes wrong it just so happens to be when you need it most (I think this is known as Murphy's Law). I've heard stories of someone telling me the chances are so low... followed by "but it just so happened on this one occasion and..." Also, I recently suffered several drive failures within one month of one another after about 5-6 years of use (more on that one in another post)Scott S.http://www.blogger.com/profile/07325635495904196575noreply@blogger.com0tag:blogger.com,1999:blog-542179103025405327.post-15553661407113204902015-10-30T18:51:00.001-07:002015-10-30T19:06:44.500-07:00NVMe (focus on M.2) the latest paradigm shift<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYGaiam3RSXrbJZKr07-6ubzJwunyvKWeoyfYN9BQiykc28YsFyh14FnR0uel8ej3sARkonHOhScPv6HjXfLxWHUmoLQaE0UZYSqPrEV5cOj0Abt7kxfhI8O6xx8-6AUEWsl2sCLISInc/s1600/NVMe-M2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYGaiam3RSXrbJZKr07-6ubzJwunyvKWeoyfYN9BQiykc28YsFyh14FnR0uel8ej3sARkonHOhScPv6HjXfLxWHUmoLQaE0UZYSqPrEV5cOj0Abt7kxfhI8O6xx8-6AUEWsl2sCLISInc/s400/NVMe-M2.png" width="400" /></a></div>
I heard about this a few months back from my adviser, and just yesterday Samsung released the 950 Pro NVMe M.2 SSD, in 256 and 512GB versions. This emerging tech has dramatic effects for the industry. Others don't appear to have realized, or even be aware of, the implications of NVMe (based on the lack of comments on the <a href="http://www.theregister.co.uk/2015/10/23/intel_planned_nvme_for_xpoint/" rel="nofollow" target="_blank">posts I follow</a> and the people I've spoken with), but then again I haven't checked everywhere.<br />
<br />
This is why I've got myself a motherboard with 2 such M.2 slots to utilize this (<a href="http://www.asrock.com/mb/Intel/X99%20Extreme11/" rel="nofollow" target="_blank">Asrock X99 extreme 11</a>), probably for use as <a href="https://blogs.oracle.com/brendan/entry/test" rel="nofollow" target="_blank">L2ARC</a>... I'll just hold off a bit longer, as prices will almost certainly drop (the 512GB version is about £300).<br />
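For what it's worth, wiring an NVMe device in as L2ARC is a one-liner; a sketch (the device name is an example):

```shell
# Add the NVMe device as a cache (L2ARC) vdev. Cache devices only hold
# read-cache data, so losing one is safe, unlike a log (ZIL) device.
zpool add data cache c1t0d0
zpool iostat -v data 5   # the 'cache' section shows L2ARC activity
```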
<br />
What will it cause?<br />
<br />
The next generation of all future laptops, smartphones, and other devices will integrate this (<a href="http://www.thessdreview.com/daily-news/latest-buzz/iphone-6ss-nvme-controller-and-speed-dictated-by-device-capacity/" rel="nofollow" target="_blank">in fact the iPhone 6S already has it</a>). This allows all next-gen hardware to be perhaps 10x faster than existing tech, based on the fact that most operations machines wait on are storage I/Os. Being so small, this form factor will replace more and more existing SSDs, such as the 2.5" SATA-based ones, as it grows more commonplace (why would you not want something much faster and more power efficient?). Because it is very efficient from a wattage point of view, running costs at larger scales will also be lower, and the space required is much less, as additional layers are added to the silicon as opposed to the older planar/flat methods. Just compare the sizes of your typical 3.5" and 2.5" storage devices to something the size of a large stick of chewing gum, which at some point will hold TBs.<br />
<br />
What is the future?<br />
<br />
I am aware that more production facilities are in the making to produce this on a larger scale with additional layers. Next year Samsung will almost certainly release a 1TB model with faster speeds, not to mention other vendors will be in direct competition. For starters, laptops not using this will be phased out. My question is: what is the maximum number of layers that can be added?<br />
<br />
<br />Scott S.http://www.blogger.com/profile/07325635495904196575noreply@blogger.com0