Showing headlines posted by dba477
TripleO Installer - the Good, the Bad and the Ugly
The TripleO installer is a great tool to deploy an OpenStack cloud. It's backed by a large user community and doesn't invent any new tools to install OpenStack. On the other hand, I'm somewhat sceptical about Heat being the right tool for software configuration. Funneling configuration options through the Heat templates down to the Puppet scripts seems cumbersome to me.
Creating functional ssh keypair on RDO Mitaka via Chrome Advanced REST Client
The problem here is that the REST API POST request that creates an ssh keypair for
accessing Nova servers does not write the RSA private key to disk; it only uploads
the public key to Nova. Closing the Chrome client therefore loses the RSA private key.
To avoid losing the private key, save response-export.json as shown below.
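Not the author's recipe, just a rough Python sketch of the same idea (the token, Nova endpoint and key name below are placeholders): the private key comes back exactly once, in the POST response, so it has to be written to disk immediately.

import os
import requests

token = "gAAAAAB..."                       # X-Auth-Token obtained from Keystone (placeholder)
nova = "http://192.169.142.54:8774/v2.1"   # Nova API endpoint (placeholder)

# POST /os-keypairs without a public_key makes Nova generate the pair;
# the private key is returned only in this response.
resp = requests.post(nova + "/os-keypairs",
                     headers={"X-Auth-Token": token},
                     json={"keypair": {"name": "oskeymitaka"}})
resp.raise_for_status()

# Persist the private key right away -- it cannot be retrieved again.
with open("oskeymitaka.pem", "w") as f:
    f.write(resp.json()["keypair"]["private_key"])
os.chmod("oskeymitaka.pem", 0o600)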
Creating Servers via REST API on RDO Mitaka via Chrome Advanced REST Client
In the post below we demonstrate the Chrome Advanced REST Client successfully issuing REST API POST requests to create RDO Mitaka servers (VMs), as well as retrieving information about servers via GET requests. All required HTTP headers are configured in the GUI environment, as is the request body for server creation.
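For orientation, a hedged Python sketch of the POST/GET pair that the post builds in the REST client GUI; the token, endpoint, server name and UUIDs are placeholders, not values from the original post.

import requests

token = "gAAAAAB..."                       # X-Auth-Token from Keystone (placeholder)
nova = "http://192.169.142.54:8774/v2.1"   # Nova API endpoint (placeholder)
headers = {"X-Auth-Token": token}

body = {"server": {"name": "CirrOSDevs01",
                   "imageRef": "<image-uuid>",
                   "flavorRef": "1",
                   "key_name": "oskeymitaka",
                   "networks": [{"uuid": "<private-network-uuid>"}]}}

# POST /servers creates the VM ...
server_id = requests.post(nova + "/servers", headers=headers,
                          json=body).json()["server"]["id"]

# ... and GET /servers/{id} returns its details, e.g. the build status.
details = requests.get(nova + "/servers/" + server_id, headers=headers).json()
print(details["server"]["status"])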
Linux Containers (LXD) as an Alternative to VirtualBox for WordPress Development
What is LXD? By combining the speed and density of containers with the security of traditional virtual machines, Canonical's LXD is the next-generation container hypervisor for Linux.
Switching to newly added Storage node on RDO Mitaka
Suppose the original answer file used for a Controller/Network + (N)*Compute Node deployment has been updated to separate the Storage Node from the Controller. As long as those updates are correct, the new node will be added to the landscape; however, the endpoints for the storage services in the Keystone database will not be updated.
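A hedged sketch (not from the original post) of repointing the cinder v2 endpoints at the new Storage Node with python-keystoneclient v3; all IPs, credentials and the assumption that only the volume service needs repointing are placeholders.

from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client

auth = v3.Password(auth_url="http://192.169.142.127:5000/v3",   # placeholder
                   username="admin", password="secret",
                   project_name="admin",
                   user_domain_id="default", project_domain_id="default")
keystone = client.Client(session=session.Session(auth=auth))

# Assume the cinder v2 endpoints still carry the old Controller address.
volume_v2 = keystone.services.find(type="volumev2")
for ep in keystone.endpoints.list(service=volume_v2):
    # Swap the Controller address for the new Storage Node address.
    keystone.endpoints.update(ep, url=ep.url.replace("192.169.142.127",
                                                     "192.169.142.157"))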
Neutron workflow for Docker Hypervisor running on DVR Cluster RDO Mitaka && HA support for Glance storage
The issue that came up in my previous posts is related specifically to the ML2&OVS&VXLAN setup; an RDO Mitaka ML2&OVS&VLAN deployment works with Nova-Docker (stable/mitaka) with no problems. Thus, as a quick and efficient workaround, I suggest a DVR deployment setup, since VXLAN tunneling is pretty much the common case on RDO systems.
Setup Docker Hypervisor on Multi Node DVR Cluster RDO Mitaka
DVR && the Nova-Docker driver (stable/mitaka) tested fine on RDO Mitaka (build 20160329), with none of the issues described in the previous post for RDO Liberty. So, create a DVR deployment with Controller/Network + N(*)Compute Nodes.
Setup Docker Hypervisor on Two Node Cluster RDO Mitaka
Perform a two node cluster deployment: Controller + Network&Compute (ML2&OVS&VXLAN). Another configuration available via packstack is Controller+Storage + Compute&Network.
External Network Provider on RDO Mitaka Controller/Network&&Compute, ML2/OVS/VLAN - Configured
Following below is a set of directives for switching OpenStack RDO Mitaka to a flat (VLAN) external network provider, which makes it possible to work with several external networks via a single L3 router. The conversion does not depend in any way on how tenants are segregated, whether via VLAN tagged networks or VXLAN (GRE) tunneling.
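As a rough illustration only (physnet names, CIDRs and credentials are placeholders, and the L3 agent is assumed to already be configured for multiple external networks), this is how two flat external provider networks might be defined with python-neutronclient after the conversion.

from neutronclient.v2_0 import client

neutron = client.Client(username="admin", password="secret",
                        tenant_name="admin",
                        auth_url="http://192.169.142.127:5000/v2.0")  # placeholder

for name, physnet, cidr, gw in [
        ("ext_net1", "physnet1", "192.168.12.0/24", "192.168.12.1"),
        ("ext_net2", "physnet2", "192.168.22.0/24", "192.168.22.1")]:
    # Flat external provider network mapped to a dedicated physical network.
    net = neutron.create_network({"network": {
        "name": name,
        "router:external": True,
        "provider:network_type": "flat",
        "provider:physical_network": physnet,
        "admin_state_up": True}})["network"]
    # External subnet: DHCP disabled, gateway on the upstream router.
    neutron.create_subnet({"subnet": {
        "network_id": net["id"],
        "ip_version": 4,
        "cidr": cidr,
        "gateway_ip": gw,
        "enable_dhcp": False}})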
Python API for creating Neutron Router with internal interface and external gateway on RDO Mitaka
The Python code posted here utilizes the NeutronClient V2 API. It creates a
neutron router for a particular tenant, an interface to a private subnet and a gateway to an external flat network. As usual, the posting was inspired by a question posted at ask.openstack.org. The code is intentionally simplified so that it is understandable to people without a strong Python development background.
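Not the author's script, but a minimal sketch of the NeutronClient V2 calls it describes; the router name, subnet/network UUIDs and credentials here are placeholders.

from neutronclient.v2_0 import client

neutron = client.Client(username="admin", password="secret",
                        tenant_name="demo",
                        auth_url="http://192.169.142.127:5000/v2.0")  # placeholder

# Create the router for the tenant.
router = neutron.create_router(
    {"router": {"name": "RouterDemo", "admin_state_up": True}})["router"]

# Attach an interface on the tenant's private subnet.
neutron.add_interface_router(router["id"], {"subnet_id": "<private-subnet-uuid>"})

# Set the gateway to the external flat network.
neutron.add_gateway_router(router["id"], {"network_id": "<external-net-uuid>"})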
Attempt to set up RDO Mitaka at any given time (Delorean trunks)
Quoting the official Delorean documentation:
"The RDO project has a continuous integration pipeline that consists of multiple jobs that deploy and test OpenStack as accomplished by different installers. This vast test coverage attempts to ensure that there are no known issues either in packaging, in code or in the installers themselves .. "
HA support for DVR centralized default SNAT functionality on RDO Mitaka Milestone 3
The verification done below actually targets converting the HAProxy/Keepalived
(Active/Active) 3 Node Controller, whose design was suggested for RDO Liberty in https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keep...
, to support Compute Nodes running in DVR mode. The core issue on Liberty was resolved for Mitaka; see the upstream record "[RFE] Unable to create a router that's both HA and distributed".
Setup DVR on RDO Liberty Controller && 2(x)Computes ML2/OVS/VLAN landscape
Just a reminder: in Juno and Kilo, DVR was available for deployments using VXLAN tunneling and required l2population activation on all nodes. One of the new features of Liberty is DVR compatibility with ML2&OVS&VLAN deployed landscapes. On RDO Liberty, packstack doesn't play as nicely with a VLAN deployment as it does with VXLAN tunneling.
RDO Kilo ML2&OVS&VLAN Multi Node Deployment on Fedora 23
The current post follows up "Hackery to get going RDO Kilo on Fedora 23".
To complete the packstack run for a two node Controller/Network and Compute
setup, I had to apply the following patches as a pre-installation step; otherwise
the neutron puppet crashed on Fedora 23 . . .
Hackery to get going RDO Kilo on Fedora 23
The sequence of hacks required to get RDO Kilo going on Fedora 23 is caused by the existence of many areas where Nova has a hard dependency on Glance v1. According to Launchpad Bug 1476770, the status for python-glanceclient is "In Progress". So, to get things working right now, only one option is available (for myself, of course) . . . .
Setup Swift as Glance backend on RDO Liberty Multi node deployment (CentOS 7.2)
The post below presumes that your testing Swift storage is on a VM emulating a
Storage Server (192.169.142.157) which has 3 devices attached,
/dev/vdb, /dev/vdc and /dev/vdd, which allows answer file entries like the ones
below. In case you need not just a POC but a real HA Swift deployment, install RDO with CONFIG_SWIFT_INSTALL=n and do a manual deployment on your management network - one Swift proxy node and at least 3 Swift Storage nodes.
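For orientation only, a hedged example of the kind of packstack answer file entries meant above; the storage host and devices follow the text, while the proxy host IP, zone/replica counts and Glance backend value are assumptions.

CONFIG_SWIFT_INSTALL=y
CONFIG_SWIFT_PROXY_HOSTS=192.169.142.127        # proxy on the Controller (assumption)
CONFIG_SWIFT_STORAGE_HOSTS=192.169.142.157
CONFIG_SWIFT_STORAGES=/dev/vdb,/dev/vdc,/dev/vdd
CONFIG_SWIFT_STORAGE_ZONES=3                    # illustrative
CONFIG_SWIFT_STORAGE_REPLICAS=3                 # illustrative
CONFIG_SWIFT_STORAGE_FSTYPE=xfs
CONFIG_GLANCE_BACKEND=swift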
Setup Swift as Glance backend on RDO Liberty (CentOS 7.2)
The post below presumes that your testing Swift storage is located somewhere on the workstation (say /dev/sdb1), is about 25 GB (XFS), and that before running packstack (AIO mode for testing) the following steps have been done . . .
Python API for boot from image creates new volume (RDO Liberty)
The post below addresses several questions posted at ask.openstack.org. In particular, the code below doesn't require a volume UUID to be hard coded in order to start a server attached to a bootable cinder LVM volume created from a glance image, which is supposed to be passed to the script via the command line. In the same way, the cinder volume name and the instance name may be passed to the script via the CLI.
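Not the author's script: a rough python-novaclient sketch of boot-from-image into a freshly built volume without a hard coded volume UUID. Credentials, flavor and volume size are placeholders; the image and instance names come from the command line.

import sys
from novaclient import client

image_name, instance_name = sys.argv[1], sys.argv[2]

nova = client.Client("2", "admin", "secret", "demo",
                     "http://192.169.142.127:5000/v2.0")   # placeholders

image = nova.images.find(name=image_name)
flavor = nova.flavors.find(name="m1.small")

# Nova asks cinder to build a fresh bootable volume from the glance image,
# so no pre-existing volume UUID is needed.
bdm = [{"source_type": "image",
        "destination_type": "volume",
        "uuid": image.id,
        "volume_size": 5,
        "boot_index": 0,
        "delete_on_termination": False}]

server = nova.servers.create(name=instance_name, image=None, flavor=flavor,
                             block_device_mapping_v2=bdm)
print(server.id)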
Adding new Compute Node to RDO Liberty Cluster && Getting EXCLUDE_SERVERS to work
This post briefly describes how to rebuild the currently used openstack-packstack packages on CentOS 7.2 (RDO Liberty) and put to work the patch proposed by the bugzilla record
"Bug 1254389 - Can no longer run packstack to maintain cluster" right away, i.e. without waiting for the official procedure of pushing new openstack-packstack packages to the stable repos.
Hackery setting up RDO Kilo on CentOS 7.2 with Mongodb && Nagios up and running
I have noticed several questions (ask.openstack.org, stackoverflow.com) regarding the ongoing issue with mongodb-server and nagios when installing RDO Kilo 2015.1.1 on CentOS 7.2 via packstack. At the moment
I see the hack provided below, which might be applied as a pre-installation step or as a fix after the initial packstack crash.