Showing headlines posted by dba477


Set up an sshuttle connection to a TripleO Overcloud deployed via instack-virt-setup on a remote VIRTHOST

Set up F24 as a WKS for "TripleO instack-virt-setup overcloud/undercloud deployment to VIRTHOST" via a trusted ssh connection. This setup works much more stably than configuring FoxyProxy on the VIRTHOST, which runs "instack" (actually the undercloud VM) hosting the heat stack "overcloud" along with several overcloud Controller and Compute VMs.
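
A minimal sketch of that tunnel from the F24 WKS, assuming the instack default ctlplane 192.0.2.0/24 and the VIRTHOST address used elsewhere in these posts (adjust both to your environment):

    # route the overcloud's ctlplane subnet through the VIRTHOST over ssh
    sshuttle -r root@192.168.1.75 192.0.2.0/24 -v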

TripleO deployment of 'master' branch via instack-virt-setup on VIRTHOST (2)

Upstream is getting close to the Newton release, and the bugs scheduled for RC2 have gone away. Below is a clean, smoothly running procedure for deploying an Overcloud from the TripleO master branch via instack-virt-setup on a 32 GB VIRTHOST. Network isolation in the overcloud is pre-configured on instack (the undercloud).
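
For orientation, the deploy step has roughly this shape when run on the undercloud; the environment files and scale counts shown here are assumptions, the exact ones are in the post:

    source ~/stackrc
    openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
      -e ~/network-environment.yaml \
      --control-scale 3 --compute-scale 1 --ntp-server pool.ntp.org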

TripleO Master Branch Overcloud deployment via QuickStart

Below is the set of instructions required to perform a TripleO QuickStart deployment for release "master". It differs a bit from testing the default release "mitaka". First, on the F24(23) workstation . . .
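
A minimal sketch of the invocation, assuming quickstart.sh from tripleo-quickstart and a VIRTHOST reachable as root over ssh (the address is an assumption):

    export VIRTHOST=192.168.1.75
    git clone https://github.com/openstack/tripleo-quickstart.git
    cd tripleo-quickstart
    bash quickstart.sh --release master $VIRTHOST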

Switch to an Overcloud with Network Isolation set up via the TripleO Master branch

This post follows up "TripleO deployment of master branch via instack-virt-setup" Launchpad bug "Updating plans breaks deployment" has status "In Progress" so to be able redeploy "overcloud" heat stack the following bellow workaround would be appiled.

TripleO deployment of 'master' branch via instack-virt-setup

With the Launchpad bug "introspection hangs due to broken ipxe config" finally resolved on 09/01/2016, the approach suggested in "TripleO manual deployment of 'master' branch" by Carlo Camacho has been retested. As it appears, things have changed in the meantime. Below is how the post mentioned above worked for me right now on a 32 GB VIRTHOST (i7 4790).

Emulating a TripleO QuickStart HA Controller cluster failover

The procedure below identifies the Controller which has RouterDSA in the active state and shuts down/starts up this Controller (controller-1 in this particular case). Then log into controller-1 and restart the pcs cluster on that Controller; afterwards run `pcs resource cleanup` for several resources, which brings the cluster nodes back into proper status.
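
A hedged sketch of that flow; the router name comes from the post, while the node address and the resource picked for cleanup are assumptions:

    source ~/overcloudrc
    neutron l3-agent-list-hosting-router RouterDSA   # ha_state column shows the active node
    # shut down / start up that controller, then, from the undercloud:
    source ~/stackrc
    ssh heat-admin@controller-1 'sudo pcs cluster start'       # use its ctlplane IP from `nova list`
    ssh heat-admin@controller-1 'sudo pcs resource cleanup neutron-l3-agent'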

TripleO QuickStart && Keeping undercloud persistent between cold reboots (newly polished)

The current update automates the procedure via /etc/rc.d/rc.local and exports stack's shell variables, which allows starting virt-manager right away, presuming that `xhost +` was issued in root's shell. Thus, we intend to survive a VIRTHOST cold reboot (downtime) and keep the previous version of the undercloud VM, being able to bring it up while avoiding a rebuild via quickstart.sh.
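
A minimal sketch of such an /etc/rc.d/rc.local fragment on the VIRTHOST, assuming the undercloud VM lives in the stack user's session libvirt under the name "undercloud":

    #!/bin/bash
    # bring the persistent undercloud VM back up after a cold reboot
    su - stack -c 'virsh --connect qemu:///session start undercloud'
    # note: rc.local must be executable (chmod +x /etc/rc.d/rc.local)
    exit 0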

Access to TripleO QuickStart overcloud via sshuttle running on F24 WorkStation

Sshuttle may be installed on Fedora 24 via a straightforward `dnf -y install sshuttle`.

So, when F24 has been set up as the WKS for a TripleO QuickStart deployment to VIRTHOST, there is no need to install the FoxyProxy add-on and tune it in Firefox, nor to connect from the ansible WKS to the undercloud via...
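
Once installed, the tunnel itself is a one-liner. A sketch assuming the quickstart undercloud external network 192.168.23.0/24 and the ctlplane 192.0.2.0/24 (check your deployment for the actual subnets):

    sshuttle -r root@192.168.1.75 192.168.23.0/24 192.0.2.0/24 -v
    # after that, the overcloud Horizon dashboard opens directly in Firefox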

Stable Mitaka HA instack-virt-setup on CentOS 7.2 VIRTHOST

The following is a step-by-step, self-sufficient instruction for performing a Mitaka HA instack-virt-setup on a CentOS 7.2 VIRTHOST, based on the current Mitaka delorean repos. It follows the official guidelines and updates the undercloud with an OVSIntPort vlan10 on the br-ctlplane OVS bridge, making HA and/or Ceph overcloud deployments with "Network Isolation" enabled possible.
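
The post carries the exact undercloud update; as a hedged illustration of what an OVS internal port tagged 10 on br-ctlplane looks like (the IP address here is an assumption):

    ovs-vsctl add-port br-ctlplane vlan10 tag=10 -- set interface vlan10 type=internal
    ip addr add 172.16.23.251/24 dev vlan10   # pick an address from your isolation net
    ip link set vlan10 up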

TripleO QuickStart vs. an attempt at the official Mitaka TripleO HA install via instack-virt-setup

The final target of this post is to compare the undercloud configuration built by QuickStart with the undercloud configuration built per the official documentation for stable Mitaka...

TripleO QuickStart HA Setup && Keeping undercloud persistent between cold reboots

This post follows up http://lxer.com/module/newswire/view/230814/index.html and might work as a time saver. We intend to survive a VIRTHOST cold reboot (downtime) and keep the previous version of the undercloud VM, being able to bring it up while avoiding a rebuild via quickstart.sh, then restart the procedure by logging into the undercloud and immediately running the overcloud deployment.
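
A hedged sketch of the manual restart after the reboot, assuming the quickstart-generated ssh config on the WKS and the VM name "undercloud":

    # on the VIRTHOST: start the kept undercloud VM under the stack user
    su - stack -c 'virsh --connect qemu:///session start undercloud'
    # on the WKS: log back in using the config quickstart wrote earlier
    ssh -F ~/.quickstart/ssh.config.ansible undercloud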

RDO TripleO QuickStart HA Setup - Work in progress

This post follows up https://www.linux.com/blog/rdo-triple0-quickstart-ha-setup-i... In the meantime, undercloud-install and undercloud-post-install (openstack undercloud install, openstack overcloud image upload) are supposed to be performed during the original run of `bash quickstart.sh --config ./ha.yml $VIRTHOST`. Neutron network deployment on the undercloud and the HA servers' configuration have been significantly rebuilt during the last weeks.
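
For orientation, the node layout section of such an ha.yml looked roughly like this (keys per the tripleo-quickstart configs of that period; an assumption, not a verbatim copy):

    # ha.yml (fragment)
    overcloud_nodes:
      - name: control_0
        flavor: control
      - name: control_1
        flavor: control
      - name: control_2
        flavor: control
      - name: compute_0
        flavor: compute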

RDO Mitaka Virtual Deployment with a real physical network as External

The Nova-Docker driver is installed on the Compute node, which is supposed to run several Java EE servers as lightweight Nova-Docker containers (instances) with floating IPs on an external flat network (actually the real office network 192.168.1.0/24). General setup: RDO Mitaka ML2&OVS&VLAN, 3 nodes.
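
A hedged sketch of mapping the office LAN as the flat external network (the physnet name, gateway, and pool range are assumptions):

    source keystonerc_admin
    neutron net-create external --provider:network_type flat \
      --provider:physical_network physnet1 --router:external
    neutron subnet-create external 192.168.1.0/24 --name ext-subnet \
      --gateway 192.168.1.1 --disable-dhcp \
      --allocation-pool start=192.168.1.100,end=192.168.1.200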

RDO TripleO QuickStart HA Setup on Intel Core i7-4790 Desktop

This posting follows up "Deploying OpenStack on just one hosted server" http://lxer.com/module/newswire/view/229554/index.html, but is focused on utilizing i7 4790/4770 CPUs with inexpensive boards like the ASUS Z97-P carrying 32 GB RAM.

Set up a VM to connect to the TripleO QuickStart Overcloud via the Virt-manager GUI

Set up the Gnome Desktop && VirtTools on the virtualization server (VIRTHOST) and make a remote connection to Virt-manager running on the VIRTHOST (192.168.1.75). Then create a VM via virt-manager as follows, using the standard CentOS 7.2 ISO image.
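
A minimal sketch of the remote connection from the workstation (the VIRTHOST IP comes from the post):

    sudo dnf -y install virt-manager virt-viewer
    virt-manager -c qemu+ssh://root@192.168.1.75/system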

TripleO QuickStart && First impressions

I believe the post below will shed some more light on the TripleO QuickStart procedure suggested on the RDO QuickStart page. 32 GB of memory on the server is a must; even during a minimal configuration run, 23 GB of RAM are required.

Backport upstream commits to stable RDO Mitaka release && Deployments with Keystone API V3

The posting below is written with the intent of avoiding the wait until a "koji" build appears in the updates repo of the stable RDO Mitaka release, which might take a couple of months or so. It doesn't actually require knowing how to properly write a RH source rpm file. It just needs picking up the raw content of git commits from the upstream git repo, converting them into patches, and rebuilding the required src.rpm(s) with the needed patch(es).
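
A hedged sketch of that flow with hypothetical project and commit names (nothing here is tied to a specific package):

    # turn the upstream commit(s) into patch files
    git clone https://github.com/openstack/<project>.git && cd <project>
    git format-patch -1 <commit-hash>
    # unpack the stable src.rpm and wire the patch into the spec
    rpm -ivh openstack-<project>-*.src.rpm
    cp 0001-*.patch ~/rpmbuild/SOURCES/
    # add a PatchNNN: line (and %patch in %prep if needed), then rebuild
    rpmbuild -ba ~/rpmbuild/SPECS/openstack-<project>.spec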

Java EE Servers as Nova-Docker Containers && RDO Mitaka External vlan networks

The Nova-Docker driver is installed on the Compute node, which is supposed to run two Java EE servers as lightweight Nova-Docker containers (instances) with floating IPs on two different external vlan-enabled subnets (10.10.10.0/24; 10.10.50.0/24). General setup: RDO Mitaka ML2&OVS&VLAN, 3 nodes.
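
As a hedged reminder of the Nova-Docker side of such a setup (the crudini call is my shorthand; the driver path is the one documented by the nova-docker project):

    # on the Compute node: point nova at the Docker driver and restart the service
    crudini --set /etc/nova/nova.conf DEFAULT compute_driver novadocker.virt.docker.DockerDriver
    systemctl restart openstack-nova-compute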

Deploying OpenStack on just one hosted server

The RDO project recently revamped all their tools to both release and install OpenStack. I was impressed by all the changes, so I wanted to test it, and indeed, it works great now.

RDO Mitaka && VLAN provider setup for several external networks

The post below addresses the case when the Controller/Network RDO Mitaka node has to have external networks of VLAN type with predefined vlan tags. A straightforward packstack deployment doesn't make it possible to achieve the desired network configuration; an external network provider of vlan type appears to be required. In this particular case, the office networks 10.10.10.0/24 (vlan tag 157) and 10.10.50.0/24 (vlan tag 172) already exist when the RDO install is running.
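
A hedged sketch for one of the two networks (the vlan tag and CIDR come from the post; the physnet name and gateway are assumptions):

    neutron net-create vlan157 --provider:network_type vlan \
      --provider:physical_network physnet1 --provider:segmentation_id 157 \
      --router:external
    neutron subnet-create vlan157 10.10.10.0/24 --name sub157 \
      --gateway 10.10.10.1 --disable-dhcp
    # repeat with segmentation_id 172 for 10.10.50.0/24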
