Plaetinck, Dieter
2012-07-11 21:49:44 UTC
Hi,
I'm evaluating Crowbar again (in VirtualBox) and trying to figure out whether we should use it on our physical hardware.
We're expecting a shipment of 2x R320 (Swift proxies) and 6x R720xd (2x 300GB for the OS + 12x 3TB for Swift storage), ETA late July/early August,
which we'll use for testing at first and, hopefully, to handle part of the production load later in August.
I have a bunch of questions:
1) The Crowbar deployment guide is still marked as version 1.2, December 2011. Is it still up to date?
2) Can we install the Crowbar admin node in a VM? As long as the other nodes can PXE-boot from it, that should be enough, right? AFAICT
the admin node comes with its own NTP/DNS/... servers, so there doesn't seem to be any external requirement.
3) I'm a bit confused by all the released versions. I presume 2.0 will be too late for us, so our best bet is 1.4 (and, for now, the 1.4 RCs)?
It seems Crowbar releases are tied to OpenStack and Hadoop releases, even though Swift gets more frequent releases than OpenStack
(see https://launchpad.net/swift/+milestones).
Is there a good way to get an up-to-date version of Swift, and can I
easily update Swift after the nodes have been installed with OpenStack?
4) If, once we've deployed a production cluster with Crowbar + Chef, things go wrong with either Crowbar or Chef and we need to
intervene quickly, is it safe to just disable the chef-client on the nodes and change the systems manually?
And once we've fixed things in Crowbar/Chef, can we simply re-enable the Chef daemons?
5) Where do we get the proprietary barclamps, like the RAID/BIOS/ILOM/DRAC/BMC ones? Is there a cost involved?
6) What's the difference between the proprietary Dell Crowbar edition and the open-source one? Is it just the proprietary barclamps, or is it a different codebase, release schedule, etc.?
7) Is Crowbar ever going to be available as a package you can install on an existing machine?
Say we have an existing VM that already runs Chef and we just want to "add" Crowbar to it.
8) Crowbar by default names nodes after the MAC address of the admin interface. Is there a way to easily spot the right physical node, e.g. by putting the MAC address on the front LCD panel?
Since the MAC address might not fit on the LCD, maybe use the service tag instead, as it is a bit shorter?
9) According to the users guide, you can install Ubuntu or Red Hat (for client nodes?).
Is it also possible to do a network install of CentOS 6, using kickstart? (And will all the barclamps still work?) Is this just as well supported as Ubuntu?
10) Should we expect any issues with the 3TB Swift storage disks?
11) The users guide mentions that the admin web interface has a menu with links to alerts and VLANs pages,
but I don't see any of these. (I only have Nodes -> Dashboard, Bulk Edit; Barclamps -> All Barclamps, Crowbar, OpenStack; and a Help menu.)
I would especially like to see the VLAN matrix to get a better understanding of how the networking fits together.
12) Networking setup:
For the admin and external-traffic networks, I guess it's just a matter of reserving a range in both of our existing networks and filling those ranges in
at /opt/dell/chef/data_bags/crowbar/bc-template-network.json.
However, our "production network" (external traffic) is not tagged,
so I'm not sure how, for example, our Swift clients would talk to the Swift proxies with the default config that tags that traffic.
I guess I should disable VLAN tagging for the external-traffic network? (See the sketch after this question list for roughly what I have in mind.)
I'm also a bit confused by the admin and BMC networks being listed with a VLAN ID while not being tagged; practically speaking that means they're not VLANned, right?
(I can reach the admin/BMC networks without VLANs.)
Btw, IIRC the nodes have, besides the BMC interface, four 1Gbps interfaces, and we have two switches with Gbps ports per rack. Is it a good idea to bond all four interfaces (two cables to each switch) and run all networks over this bonded interface, or would you suggest separating the networks more physically? (The BMC links are already separate, so I think that's OK?)
13) Is there any example out there of how to do more advanced Swift zone mapping (using specific nodes, disks, etc.)?
I know I have to modify the "disk_zone_assign_expr" field in the JSON editor of the proposal, but an example would be nice.
14) Please resolve the bug tracker situation. I find it hard to have faith in an open-source project that has no visible bug tracker.
15) Anything else I'm forgetting? :)
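To make question 12 a bit more concrete: below is roughly the kind of entry I expect to end up with for the external/public network in bc-template-network.json. I'm writing the key names and values from memory purely as an illustration, so they may not match the actual template, and the addresses are placeholders for a range we'd reserve in our existing production network. The main point is setting "use_vlan" to false so the traffic goes out untagged:

  "public": {
    "conduit": "intf1",
    "vlan": 300,
    "use_vlan": false,
    "add_bridge": false,
    "subnet": "10.0.0.0",
    "netmask": "255.255.255.0",
    "broadcast": "10.0.0.255",
    "router": "10.0.0.1",
    "ranges": {
      "host": { "start": "10.0.0.81", "end": "10.0.0.160" }
    }
  }

Does that look like the right way to express an untagged external network, or am I misunderstanding how the network barclamp handles tagging?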
thanks,
Dieter