Name: Ceph: object storage, block storage, file system, replication, massive scalability, and then some!
--client lca --show lca2013 --room MCC6 1672 --force
Author(s): Florian Haas, Tim Serong
Location: MCC6
Date: Fri 01 Feb
Start: 13:20
First Raw Start: error-in-template
Duration: 1:40:00
Offset: None
End: 15:00
Last Raw End:
Chapters:
Total cuts_time: None min.
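As a quick sanity check, the schedule fields above are self-consistent (End = Start + Duration). An illustration in Python:

    from datetime import datetime, timedelta

    # Start 13:20 plus Duration 1:40:00 gives End 15:00.
    start = datetime.strptime("13:20", "%H:%M")
    duration = timedelta(hours=1, minutes=40)
    print((start + duration).strftime("%H:%M"))  # -> 15:00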
http://lca2013.linux.org.au/schedule/30091/view_talk
raw-playlist
raw-mp4-playlist
encoded-files-playlist
mp4
svg
png
assets
release.pdf
Ceph_object_storage_block_storage_file_system_replication_massive_scalability_and_then_some.json
logs
Admin:
episode
episode list
cut list
raw files day
marks day
image_files
State:
borked
edit
encode
push to queue
post
richard
review 1
email
review 2
make public
tweet
to-miror
conf
done
Locked: clear this to unlock.
Locked by: user/process that locked it.
Start: initially scheduled time from the master schedule, adjusted to match reality.
Duration: length in hh:mm:ss.
Name: video title (shows in video search results).
Emails: email(s) of the presenter(s).
Released: Unknown / Yes / No (has someone authorised publication?)
Normalise:
Channelcopy: m=mono, 01=copy left to right, 10=copy right to left, 00=ignore.
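As an illustration only (not Veyepar's actual encoding pipeline), these channelcopy codes could be mapped to ffmpeg pan filters roughly as below; the filter strings are assumptions that simply follow the help text above:

    # Hypothetical mapping of channelcopy codes to ffmpeg audio filters.
    CHANNELCOPY_FILTERS = {
        "m":  "pan=mono|c0=0.5*c0+0.5*c1",  # mix both channels down to mono
        "01": "pan=stereo|c0=c0|c1=c0",     # copy left channel onto right
        "10": "pan=stereo|c0=c1|c1=c1",     # copy right channel onto left
        "00": None,                         # ignore: leave the audio as-is
    }

    def audio_filter_args(channelcopy):
        """Return the ffmpeg '-af' arguments implied by a channelcopy code."""
        filt = CHANNELCOPY_FILTERS.get(channelcopy)
        return ["-af", filt] if filt else []

    # audio_filter_args("01") -> ['-af', 'pan=stereo|c0=c0|c1=c0']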
Thumbnail: filename.png
Description (markdown):
Ceph is one of the most exciting new technologies to recently emerge in the Linux storage space. Based on the RADOS object store, the Ceph stack boasts massive scalability and high availability using nothing but commercial, off-the-shelf hardware and free and open source software. Ceph includes a massively distributed filesystem (Ceph FS); a striped, replicated, highly available block device (RADOS block device, RBD); S3 and Swift object storage capability through the RESTful RADOS Gateway; and a simple, well-documented native API with language bindings for C, C++ and Python. The Ceph filesystem and RBD have been part of the mainline kernel since the 2.6.3x releases, and the server-side stack has recently undergone an extensive cleanup and stabilization phase. The Ceph stack is also well integrated into OpenStack, making it a potential "one-stop shop" for OpenStack object, image and volume storage.

In this hands-on tutorial, Florian and Tim will walk you through the initial setup of a Ceph cluster, explore its capabilities, highlight its most important features and identify current shortcomings, discuss performance considerations, and identify common Ceph failure modes and troubleshooting steps.

Attendees should have a good understanding of Linux systems administration. Prior experience with distributed storage (like Lustre, GlusterFS, DRBD, Swift) is a plus but not required. Prior knowledge of the Ceph stack is not necessary. Attendees will have the opportunity to follow the tutorial in a virtual Ceph cluster of their own; pre-installed Libvirt/KVM virtual images will be available for that purpose.

Co-presenter: Tim Serong, SUSE
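The abstract mentions Ceph's simple native API with Python bindings. As a rough, hedged sketch of what that looks like (it assumes the python-rados bindings, a readable /etc/ceph/ceph.conf, and a pool named "data"; the pool and object names are placeholders, not anything from this talk):

    import rados

    # Connect to the cluster described by the local ceph.conf.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('data')            # placeholder pool name
        try:
            ioctx.write_full('hello_object', b'hello from librados')
            print(ioctx.read('hello_object'))         # -> b'hello from librados'
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()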
Comment: production notes.
Rf filename: root is .../show/dv/location/, example: 2013-03-13/13:13:30.dv
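In other words, the full path to a raw file is that root (show, then dv, then location) plus the Rf filename. A minimal sketch of that join; the root directory used here is a made-up placeholder, since the real one depends on the local Veyepar setup:

    import os

    def raw_file_path(dv_root, location, rf_filename):
        # Join the show's dv root, the room/location, and the Rf filename.
        return os.path.join(dv_root, location, rf_filename)

    # raw_file_path('/srv/veyepar/lca2013/dv', 'MCC6', '2013-03-13/13:13:30.dv')
    # -> '/srv/veyepar/lca2013/dv/MCC6/2013-03-13/13:13:30.dv'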
Sequence:
get this:
check and save to add this
Veyepar: Video Eyeball Processor and Review