Showing headlines posted by dba477


Set up GlassFish 4.1 Nova-Docker Container via phusion/baseimage on RDO Juno

The problem here is that phusion/baseimage, per https://github.com/phusion/baseimage-docker, should provide ssh access to the container, but it doesn't. For a plain Docker container there is an easy workaround suggested by Mykola Gurov in http://stackoverflow.com/questions/27816298/cannot-get-ssh-a... , but it is of no help in the case of a Nova-Docker container.
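
For reference, enabling sshd in a phusion/baseimage-based image is normally done in the Dockerfile; the sketch below shows the image's documented knobs (the baseimage tag is an assumption, and this is not necessarily the exact workaround from the stackoverflow thread):

```
# Dockerfile sketch for an ssh-enabled phusion/baseimage, written out via
# a heredoc; enable_insecure_key is baseimage's development-only helper.
cat > Dockerfile <<'EOF'
FROM phusion/baseimage:0.9.15
RUN rm -f /etc/service/sshd/down   # let runit start sshd at boot
RUN /usr/sbin/enable_insecure_key  # development use only
CMD ["/sbin/my_init"]
EOF
docker build -t baseimage-sshd .
```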

Building docker container for GlassFish 4.1 via phusion/baseimage on CentOS 7

Building a Docker container for GlassFish 4.1 on the phusion/baseimage image allows several scripts placed in /etc/my_init.d to be executed at container start. In particular, not only run.sh, coming from https://registry.hub.docker.com/u/bonelli/glassfish-4.1/ , but also database.sh, which starts up the Derby database, have been placed in that folder, which yields a fully functional GlassFish 4.1 Docker container. The core issue in ([1]) is the attempt to extend JAVA:8, which causes a problem when starting several daemons during container load.
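
As a rough illustration, a Dockerfile for such an image might look like the sketch below (the baseimage tag and the contents of run.sh/database.sh are assumptions; the script names come from the post):

```
# Build a phusion/baseimage-based image whose init (/sbin/my_init) runs
# every script placed in /etc/my_init.d before starting runit services.
cat > Dockerfile <<'EOF'
FROM phusion/baseimage:0.9.15
ADD run.sh      /etc/my_init.d/run.sh
ADD database.sh /etc/my_init.d/database.sh
RUN chmod +x /etc/my_init.d/*.sh
CMD ["/sbin/my_init"]
EOF
docker build -t glassfish41-derby .
```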

Running GlassFish 4.1 in Nova-Docker Container on RDO Juno

This post follows up http://www.linux.com/community/blogs/133-general-linux/79956... The Docker image built below has JDK 1.8 and GlassFish 4.1 pre-installed and provides ssh access to the Nova-Docker container launched from it, which makes it possible to initialize GlassFish with JPA support manually.
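
Once ssh'd into the container, the manual JPA initialization could look roughly like this (pool and resource names, and the Derby connection properties, are illustrative, not taken from the post):

```
# Start the domain and the bundled Derby database, then create a JDBC
# connection pool and resource that a JPA persistence unit can reference.
asadmin start-domain domain1
asadmin start-database
asadmin create-jdbc-connection-pool \
  --datasourceclassname org.apache.derby.jdbc.ClientDataSource \
  --restype javax.sql.DataSource \
  --property portNumber=1527:serverName=localhost:databaseName=sampleDB \
  SamplePool
asadmin create-jdbc-resource --connectionpoolid SamplePool jdbc/sample
```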

Running Oracle XE 11gR2 in Nova-Docker container on OpenStack RDO Juno (CentOS 7)

The Docker image arahman/docker-oracle-xe-11g:latest makes it possible to build a Nova-Docker container on RDO Juno running an Oracle XE instance, which may be accessed remotely via the floating IP assigned to the nova instance. Several network configuration files require tuning with the Nova instance name and the floating IP assigned from the neutron external pool.
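
For context, the usual Nova-Docker image workflow for such an image looks roughly like this (flavor, net-id and floating IP are placeholders; the glance image name must match the docker image name):

```
# Pull the image, push it into glance with container-format docker,
# then boot a nova instance from it and attach a floating IP.
docker pull arahman/docker-oracle-xe-11g:latest
docker save arahman/docker-oracle-xe-11g:latest | \
  glance image-create --is-public True --container-format docker \
         --disk-format raw --name arahman/docker-oracle-xe-11g:latest
nova boot --image "arahman/docker-oracle-xe-11g:latest" --flavor m1.small \
     --nic net-id=<private-net-id> OracleXE
nova floating-ip-associate OracleXE <floating-ip>
```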

Xen Virtualization on Linux and Solaris

Recently Filip Krikav forked the project on GitHub and created a Juno branch based on the latest commit, fixing the problem of loading an image from glance.

Running Nova-Docker on OpenStack RDO Juno (CentOS 7)

Quote (http://technodrone.blogspot.com/2014/10/nova-docker-on-juno....) The Docker driver is a hypervisor driver for Openstack Nova Compute. It was introduced with the Havana release, but lives out-of-tree for Icehouse and Juno. Being out-of-tree has allowed the driver to reach maturity and feature-parity faster than would be possible should it have remained in-tree. It is expected the driver will return to mainline Nova in the Kilo release.
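
Wiring the out-of-tree driver into Juno boils down to two settings; a minimal sketch, using the standard options from the Nova-Docker docs (openstack-config is RDO's crudini wrapper):

```
# Point nova-compute at the Docker driver and teach glance about the
# docker container format, then restart the affected services.
openstack-config --set /etc/nova/nova.conf DEFAULT \
    compute_driver novadocker.virt.docker.DockerDriver
openstack-config --set /etc/glance/glance-api.conf DEFAULT \
    container_formats ami,ari,aki,bare,ovf,docker
systemctl restart openstack-nova-compute openstack-glance-api
```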

How VMs access metadata via qrouter-namespace in Juno

It is actually an update of http://techbackground.blogspot.ie/2013/06/metadata-via-quant... for Neutron on Juno (the original post covers the Quantum implementation on Grizzly). From my standpoint, how VMs launched via nova reach the nova-api metadata service (and get a proper response from it) through the Neutron flow causes a lot of confusion, due to a lack of understanding of the core concepts.
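
The flow is easy to verify on the Juno node itself; a short inspection sketch (the router UUID is a placeholder):

```
# Inside the qrouter namespace, requests to 169.254.169.254:80 are
# REDIRECTed to neutron-ns-metadata-proxy on port 9697, which relays
# them through the metadata agent to nova-api.
ip netns | grep qrouter
ip netns exec qrouter-<router-uuid> iptables -t nat -S | grep 169.254
ip netns exec qrouter-<router-uuid> netstat -lntp | grep 9697
```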

Tuning RDO Juno CentOS 7 Two Node Gluster 3.5.2 Cluster for Qemu integration with libgfapi to work seamlessly

This post focuses on tuning a replica 2 gluster volume when building an RDO Juno Gluster cluster on CentOS 7. The steps undertaken come from the Gluster 3.5.2 release notes (http://blog.nixpanic.net/2014_07_01_archive.html) and make the Qemu (1.5.3) && libgfapi integration really work.
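
The tuning amounts to a handful of volume options plus one glusterd setting; a sketch following the 3.5.2 release notes (the volume name is a placeholder; uid/gid 107 is the qemu user on CentOS 7):

```
# Allow unprivileged (libgfapi) clients and hand the volume to qemu.
gluster volume set <volname> server.allow-insecure on
gluster volume set <volname> storage.owner-uid 107
gluster volume set <volname> storage.owner-gid 107
# In /etc/glusterfs/glusterd.vol, inside the "volume management" stanza, add:
#     option rpc-auth-allow-insecure on
# then restart glusterd on every node:
systemctl restart glusterd
```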

LVMiSCSI cinder backend for RDO Juno on CentOS 7

This post follows up http://lxer.com/module/newswire/view/207415/index.html RDO Juno has been installed on the Controller and Compute nodes via packstack as described in the link above. The iSCSI target implementation on CentOS 7 differs significantly from CentOS 6.5: it is based on the CLI utility targetcli and the target service. With Enterprise Linux 7, both Red Hat and CentOS, there is a big change in the management of iSCSI targets; the software runs as part of the standard systemd structure. Consequently there are significant changes in the multi-backend cinder architecture of RDO Juno running on CentOS 7 or Fedora 21 with LVM-based iSCSI targets.
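
On EL7 the cinder side of such a backend typically looks like the stanza below (backend, volume group and backend-name values are illustrative), with the LIO target service enabled in systemd:

```
# /etc/cinder/cinder.conf (excerpt): lioadm makes cinder drive LIO
# through targetcli instead of the old tgtd.
#   [DEFAULT]
#   enabled_backends = lvm
#   [lvm]
#   iscsi_helper = lioadm
#   volume_group = cinder-volumes
#   volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
#   volume_backend_name = LVM_iSCSI
systemctl enable target
systemctl start target
systemctl restart openstack-cinder-volume
```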

RDO Juno Set up Two Real Node (Controller+Compute) Gluster 3.5.2 Cluster ML2&OVS&VXLAN on CentOS 7

The post below follows up http://cloudssky.com/en/blog/RDO-OpenStack-Juno-ML2-VXLAN-2-... however, the answer file provided here creates the Controller && Compute Nodes in a single run. Based on the RDO Juno release as of 10/27/2014, it doesn't require creating the OVS bridge br-ex and OVS port enp2s0 on the Compute Node. It also doesn't install the nova-compute service on the Controller. The Gluster 3.5.2 setup is also performed in a way which differs from the similar procedure on the IceHouse && Havana RDO releases.
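
The role split is driven by a few answer-file keys; a sketch using standard Juno packstack options (the IPs are examples):

```
# Answer-file excerpt: controller/network roles on one host, compute on
# the other, ML2 with VXLAN tenant networks over the second NIC.
CONFIG_CONTROLLER_HOST=192.169.1.127
CONFIG_NETWORK_HOSTS=192.169.1.127
CONFIG_COMPUTE_HOSTS=192.169.1.137
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_OVS_TUNNEL_IF=enp5s1
```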

Understanding Packet Flows in OpenStack Neutron

A Neutron setup is composed of numerous interfaces, such as br-int, br-tun, br-ex, and eth1/2/3. For beginners it's usually hard to understand what route packets will take through these devices and hosts, so let's take a closer look.
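
A few standard commands make those hops visible; for example (bridge names as above, UUID a placeholder):

```
# Inspect the bridges and namespaces a packet traverses on its way out.
ovs-vsctl show                        # bridges, ports and patch links
ovs-ofctl dump-flows br-tun           # flows on the tunnel bridge
ip netns                              # qdhcp-/qrouter- namespaces
ip netns exec qrouter-<uuid> ip addr  # interfaces inside the router
```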

Setup QCOW2 standard CentOS 7 cloud image to work with 2 VLANs on IceHouse ML2&OVS&GRE System

Notice that the same schema would work for any F20 or Ubuntu QCOW2 cloud image via a qemu-nbd mount, increasing the number of NIC interface files to 2, 3, ... The approach suggested here is universal: any cinder volume built on the updated glance image (2 NICs ready) would be 2 NICs ready as well.
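
The qemu-nbd part of the schema looks roughly like this (image file name and partition number are typical, not guaranteed for every cloud image):

```
# Attach the qcow2 image as a network block device, mount its root
# partition and drop in a config file for the second NIC.
modprobe nbd max_part=8
qemu-nbd -c /dev/nbd0 CentOS-7-x86_64-GenericCloud.qcow2
mount /dev/nbd0p1 /mnt
cat > /mnt/etc/sysconfig/network-scripts/ifcfg-eth1 <<'EOF'
DEVICE=eth1
BOOTPROTO=dhcp
ONBOOT=yes
EOF
umount /mnt
qemu-nbd -d /dev/nbd0
```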

Setup CentOS 7 cloud instance on IceHouse Neutron ML2&OVS&GRE System

A CentOS 7.0 qcow2 image for glance is now available at http://openstack.redhat.com/Image_resources. Regardless of dhcp-option 26,1454 being set up on the system, the current image boots with MTU 1500. The workaround for now is to launch the instance with no ssh keypair and run a post-installation script.
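
My guess at the shape of that post-installation script, passed as user-data at launch (the interface name is assumed; the MTU value comes from the dhcp option in the post):

```
#!/bin/bash
# User-data script: pin the GRE-friendly MTU, since the image ignores
# dhcp-option 26,1454 and comes up with MTU 1500.
echo "MTU=1454" >> /etc/sysconfig/network-scripts/ifcfg-eth0
ip link set dev eth0 mtu 1454
```

It would be injected with something like `nova boot --user-data mtu-fix.sh ...`.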

Setup Gluster 3.5.2 on Two Node Controller&Compute Neutron ML2&VXLAN&OVS CentOS 7 Cluster

This post is an update of a previous one, RDO Setup Two Real Node (Controller+Compute) IceHouse Neutron ML2&OVS&VXLAN Cluster on CentOS 7, http://bderzhavets.blogspot.com/2014/07/rdo-setup-two-real-n... It focuses on the Gluster 3.5.2 implementation, including tuning the /etc/sysconfig/iptables files on the CentOS 7 Controller and Compute Nodes.
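
The gluster-related entries amount to a few lines in /etc/sysconfig/iptables on both nodes (ports per Gluster 3.5 defaults: 24007-24008 for glusterd, one port from 49152 up per brick, 111 for portmapper; the brick range below is an example):

```
# /etc/sysconfig/iptables (excerpt); reload with: systemctl restart iptables
-A INPUT -p tcp -m multiport --dports 24007:24008 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 49152:49155 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
```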

RDO Setup Two Real Node (Controller+Compute) IceHouse Neutron ML2&OVS&VXLAN Cluster on CentOS 7

Two boxes have been set up, each with 2 NICs (enp2s0, enp5s1), for the Controller && Compute Node setup. Before running `packstack --answer-file=TwoNodeVXLAN.txt`, SELINUX was set to permissive on both nodes. Both enp5s1 interfaces were assigned IPs (192.168.0.127, 192.168.0.137) and set to promiscuous mode. The firewalld and NetworkManager services were disabled; the IPv4 firewall with iptables and the network service are enabled and running. Packstack is bound to the public IP of interface enp2s0, 192.169.1.127; the Compute Node is 192.169.1.137 (see the answer file).
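
A sketch of that per-node preparation (package and service names are the CentOS 7 ones):

```
# Run on both boxes before packstack: permissive SELinux, legacy network
# + iptables instead of NetworkManager + firewalld, promiscuous second NIC.
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
setenforce 0
systemctl stop firewalld NetworkManager
systemctl disable firewalld NetworkManager
yum -y install iptables-services
systemctl enable iptables network
systemctl start iptables network
ip link set enp5s1 promisc on
```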

RDO Setup Two Real Node (Controller+Compute) IceHouse Neutron ML2&OVS&VLAN Cluster on CentOS 7

As of 07/14/2014 the bug https://ask.openstack.org/en/question/35705/attempt-of-rdo-a... is still pending, and the workaround suggested there should be applied during a two node RDO packstack installation. A successful Neutron ML2&&OVS&&VLAN multi node setup requires a correct version of plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini, which appears to be generated with errors by packstack.

Setup Light Weight X Windows environment (Enlightenment) on Fedora 20 Cloud instance

Needless to say, setting up a lightweight X environment on Fedora 20 cloud instances is very important for comfortable work inside the VM; on an Ubuntu Trusty cloud server, for instance, just one command installs the E17 environment: `apt-get install xorg e17 firefox`. For some reason E17 was dropped from the official F20 repos and may be functional only on top of a previous MATE Desktop setup on the VM.

RDO Setup Two Real Node (Controller+Compute) IceHouse Neutron ML2&OVS&VLAN Cluster on F20

A successful Neutron ML2&OVS&VLAN multi node setup requires a correct version of plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini, which appears to be generated with errors by packstack. Several days of playing with plugin.ini allowed me to build a properly working system.
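
For orientation, a working plugin.ini of that era had roughly the shape below (VLAN range, physical network and bridge names are illustrative, not the post's exact values):

```
# /etc/neutron/plugins/ml2/ml2_conf.ini (sketch), symlinked as plugin.ini:
#   [ml2]
#   type_drivers = vlan
#   tenant_network_types = vlan
#   mechanism_drivers = openvswitch
#   [ml2_type_vlan]
#   network_vlan_ranges = physnet1:100:200
#   [ovs]
#   bridge_mappings = physnet1:br-eth1
ln -sf /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
systemctl restart neutron-server
```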

RDO Setup Two Real Node (Controller+Compute) IceHouse Neutron ML2&OVS&GRE Cluster on F20

Finally, I've designed an answer file creating ml2_conf.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini.

RDO IceHouse Setup Two Node Neutron ML2&OVS&GRE Cluster on Fedora 20

Two KVMs have been created, each one having 2 virtual NICs (eth0, eth1), for the Controller && Compute Nodes setup.
