Thursday, 18 January 2018

Pivotal Cloud Foundry App Instance Routing in HTTP Headers

Developers who want to obtain debug data for a specific instance of an app can use the HTTP header X-CF-APP-INSTANCE to make a request to that app instance. To demonstrate this, we can write a Spring Boot application which simply outputs the current CF app index so we can be sure we are hitting the right application container.

The simplest way to do that is to define a RestController using Spring Boot as follows, which enables us to get the current application index and verify we are hitting the right container instance.

package com.example.pas;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DemoRest {

    private final String ip;
    private final String index;

    public DemoRest(@Value("${CF_INSTANCE_IP:}") String ip,
                    @Value("${CF_INSTANCE_INDEX:0}") String index) {
        this.ip = ip;
        this.index = index;
    }

    @RequestMapping("/")
    public InstanceDetail getAppDetails() {
        return new InstanceDetail(ip, index);
    }
}
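The InstanceDetail response type isn't shown in the post; a minimal sketch of what it could look like (a hypothetical simple POJO — the real class may differ) only needs to carry the two values that get serialised to JSON:

```java
// Hypothetical sketch of the InstanceDetail response type referenced above.
// Jackson serialises the getters into the {"index": ..., "ip": ...} JSON
// shown in the output later in the post.
public class InstanceDetail {

    private final String ip;
    private final String index;

    public InstanceDetail(String ip, String index) {
        this.ip = ip;
        this.index = index;
    }

    public String getIp() {
        return ip;
    }

    public String getIndex() {
        return index;
    }
}
```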

So with the application deployed, we can see we have 3 instances as follows

pasapicella@pas-macbook:~$ cf app pas-pcf-routingdemo
Showing health and status for app pas-pcf-routingdemo in org pivot-papicella / space dotnet as

name:                pas-pcf-routingdemo
requested state:     started
instances:           3/3
isolation segment:   main
usage:               756M x 3 instances
last uploaded:       Thu 18 Jan 20:41:26 AEDT 2018
stack:               cflinuxfs2
buildpack:           client-certificate-mapper=1.4.0_RELEASE container-security-provider=1.11.0_RELEASE java-buildpack=v4.7.1-offline-
                     java-main java-opts java-security jvmkill-agent=1... (no decorators apply)

     state     since                  cpu    memory           disk           details
#0   running   2018-01-18T09:44:07Z   0.4%   224.8M of 756M   137.5M of 1G
#1   running   2018-01-18T09:44:13Z   0.8%   205M of 756M     137.5M of 1G
#2   running   2018-01-18T09:44:06Z   0.7%   221.1M of 756M   137.5M of 1G

Now let's simply access our application a few times using the "/" endpoint and verify we are accessing different application containers via the GoRouter's round-robin routing

pasapicella@pas-macbook:~$ http
HTTP/1.1 200 OK
Content-Length: 34
Content-Type: application/json;charset=UTF-8
Date: Thu, 18 Jan 2018 09:58:10 GMT
Set-Cookie: dtCookie=6$B570EBB532CD9D8DAA2BCAE14C4277FC|RUM+Default+Application|1;; Path=/
X-Vcap-Request-Id: 336ba633-685b-4235-467d-b9833a9e6435

{
    "index": "2",
    "ip": ""
}

pasapicella@pas-macbook:~$ http
HTTP/1.1 200 OK
Content-Length: 34
Content-Type: application/json;charset=UTF-8
Date: Thu, 18 Jan 2018 09:58:15 GMT
Set-Cookie: dtCookie=5$3389B3DFBAD936D68CBAF30657653465|RUM+Default+Application|1;; Path=/
X-Vcap-Request-Id: aa74e093-9031-4df5-73a5-bc9f1741a942

{
    "index": "1",
    "ip": ""
}

Now we can request access to just the container with application index "1" as follows

1. First get the Application GUID as shown below

pasapicella@pas-macbook:~$ cf app pas-pcf-routingdemo --guid

2. Now let's invoke a call to the application and set the header required to instruct GoRouter to target a specific application index


Example below is using HTTPie 

Accessing Instance 1

pasapicella@pas-macbook:~$ http "X-CF-APP-INSTANCE":"5bdf2f08-34a5-402f-b7cb-f29c81d171e0:1"
HTTP/1.1 200 OK
Content-Length: 34
Content-Type: application/json;charset=UTF-8
Date: Thu, 18 Jan 2018 10:20:31 GMT
Set-Cookie: dtCookie=5$FD08A5C88469AF379C8AD3F36FA7984B|RUM+Default+Application|1;; Path=/
X-Vcap-Request-Id: cb19b960-713a-49d0-4529-a0766a8880a7

{
    "index": "1",
    "ip": ""
}

Accessing Instance 2 

pasapicella@pas-macbook:~$ http "X-CF-APP-INSTANCE":"5bdf2f08-34a5-402f-b7cb-f29c81d171e0:2"
HTTP/1.1 200 OK
Content-Length: 34
Content-Type: application/json;charset=UTF-8
Date: Thu, 18 Jan 2018 10:21:09 GMT
Set-Cookie: dtCookie=7$53957A744D473BB024EB1FF4F0CD60A9|RUM+Default+Application|1;; Path=/
X-Vcap-Request-Id: 33cc7922-9b43-4182-5c36-13eee42a9919

{
    "index": "2",
    "ip": ""
}

More Information

Friday, 29 December 2017

Verifying PCF 2.0 with PAS small footprint with bosh CLI

After installing PCF 2.0, here is how you can verify your installation using the new bosh CLI. In this example I use "bosh2", but with PCF 2.0 you can actually use "bosh": the v2 CLI shipped as "bosh2" in PCF 1.12 and some previous versions, alongside bosh v1.

1. SSH into your Ops Manager VM as shown below; in this example we are using GCP

2. Create an alias for your ENV as shown below

Note: You will need the bosh director IP address which you can obtain using

ubuntu@opsman-pcf:~$ bosh2 alias-env gcp -e y.y.y.y --ca-cert /var/tempest/workspaces/default/root_ca_certificate
Using environment 'y.y.y.y' as anonymous user

Name      p-bosh
UUID      3c886290-144f-4ec7-86dd-b7586b98dc3b
Version   264.4.0 (00000000)
CPI       google_cpi
Features  compiled_package_cache: disabled
          config_server: enabled
          dns: disabled
          snapshots: disabled
User      (not logged in)


3. Log in to the BOSH Director with UAA

Note: You will need the username / password for the bosh director which you can obtain as follows

ubuntu@opsman-pcf:~$ bosh2 -e gcp log-in
Email (): director
Password ():

Successfully authenticated with UAA


4. View all the VMs managed by BOSH as follows

ubuntu@opsman-pcf:~/scripts$ bosh2 -e gcp vms --column=Instance --column="Process State" --column=AZ --column="VM Type"
Using environment 'y.y.y.y' as user 'director' (bosh.*.read, openid, bosh.*.admin, bosh.admin)

Task 65. Done

Deployment 'cf-adee3657c74c7b9a8e35'

Instance                                             Process State  AZ                      VM Type
backup-prepare/996340c7-4114-472e-b660-a5e353493fa4  running        australia-southeast1-a  micro
blobstore/cdd6fc8d-25c9-4cfb-9908-89eb0164fb80       running        australia-southeast1-a  medium.mem
compute/2dfcc046-c16a-4a36-9170-ef70d1881818         running        australia-southeast1-a  xlarge.disk
control/2f3d0bc6-9a2d-4c08-9ccc-a88bad6382a3         running        australia-southeast1-a  xlarge
database/da60f0e7-b8e3-4f8d-945d-306b267ac161        running        australia-southeast1-a  large.disk
mysql_monitor/a88331c4-1659-4fe4-b8e9-89ce4bf092fd   running        australia-southeast1-a  micro
router/276e308e-a476-4c8d-9555-21623dada492          running        australia-southeast1-a  micro

7 vms


** Few other examples **

- View all the deployments. In this example we just have the PAS small footprint tile installed, so only one deployment exists and no other BOSH-managed tiles exist

ubuntu@opsman-pcf:~/scripts$ bosh2 -e gcp deployments --column=name
Using environment 'y.y.y.y' as user 'director' (bosh.*.read, openid, bosh.*.admin, bosh.admin)


1 deployments


- Run cloud check to check for issues

ubuntu@opsman-pcf:~/scripts$ bosh2 -e gcp -d cf-adee3657c74c7b9a8e35 cloud-check
Using environment 'y.y.y.y' as user 'director' (bosh.*.read, openid, bosh.*.admin, bosh.admin)

Using deployment 'cf-adee3657c74c7b9a8e35'

Task 66

Task 66 | 04:20:52 | Scanning 7 VMs: Checking VM states (00:00:06)
Task 66 | 04:20:58 | Scanning 7 VMs: 7 OK, 0 unresponsive, 0 missing, 0 unbound (00:00:00)
Task 66 | 04:20:58 | Scanning 3 persistent disks: Looking for inactive disks (00:00:01)
Task 66 | 04:20:59 | Scanning 3 persistent disks: 3 OK, 0 missing, 0 inactive, 0 mount-info mismatch (00:00:00)

Task 66 Started  Fri Dec 29 04:20:52 UTC 2017
Task 66 Finished Fri Dec 29 04:20:59 UTC 2017
Task 66 Duration 00:00:07
Task 66 done

#  Type  Description

0 problems


More Information

Tuesday, 19 December 2017

Terminating a specific application instance using its index number in Pivotal Cloud Foundry

I was recently asked how to terminate a specific application instance rather than terminating all instances using "cf delete".

We can do this using the CF REST API or, even easier, the CF CLI "cf curl" command, which makes it straightforward to make REST-based calls into Cloud Foundry as shown below.


Below assumes you already logged into PCF using the CF CLI

1. First find an application that has multiple instances

pasapicella@pas-macbook:~$ cf app pas-cf-manifest
Showing health and status for app pas-cf-manifest in org apples-pivotal-org / space development as

name:              pas-cf-manifest
requested state:   started
instances:         2/2
usage:             756M x 2 instances
last uploaded:     Sun 19 Nov 21:26:26 AEDT 2017
stack:             cflinuxfs2
buildpack:         client-certificate-mapper=1.2.0_RELEASE container-security-provider=1.8.0_RELEASE java-buildpack=v4.5-offline- java-main
                   java-opts jvmkill-agent=1.10.0_RELEASE open-jdk-like-jre=1.8.0_1...

     state     since                  cpu    memory           disk           details
#0   running   2017-12-16T00:11:27Z   0.0%   241.5M of 756M   139.9M of 1G
#1   running   2017-12-17T10:39:09Z   0.3%   221.3M of 756M   139.9M of 1G

2. Use "cf curl" with the application GUID to check all of the application's instances and their current state

pasapicella@pas-macbook:~$ cf curl /v2/apps/`cf app pas-cf-manifest --guid`/instances
{
   "0": {
      "state": "RUNNING",
      "uptime": 293653,
      "since": 1513383087
   },
   "1": {
      "state": "RUNNING",
      "uptime": 169591,
      "since": 1513507149
   }
}

3. Now let's delete the instance with index "1". Don't forget that PCF will detect that the current state of the application no longer matches its desired state, and will restart the application instance very quickly

pasapicella@pas-macbook:~$ cf curl /v2/apps/`cf app pas-cf-manifest --guid`/instances/1 -X DELETE

Note: You won't get any output BUT you can verify it has done what you asked for by running the command at step #2 again

pasapicella@pas-macbook:~$ cf curl /v2/apps/`cf app pas-cf-manifest --guid`/instances
{
   "0": {
      "state": "RUNNING",
      "uptime": 293852,
      "since": 1513383087
   },
   "1": {
      "state": "DOWN",
      "uptime": 0
   }
}

If you run it again say 30 seconds later you should see your application instance re-started as shown below

pasapicella@pas-macbook:~$ cf curl /v2/apps/`cf app pas-cf-manifest --guid`/instances
{
   "0": {
      "state": "RUNNING",
      "uptime": 293870,
      "since": 1513383087
   },
   "1": {
      "state": "STARTING",
      "uptime": 11,
      "since": 1513676947
   }
}

pasapicella@pas-macbook:~$ cf curl /v2/apps/`cf app pas-cf-manifest --guid`/instances
{
   "0": {
      "state": "RUNNING",
      "uptime": 293924,
      "since": 1513383087
   },
   "1": {
      "state": "RUNNING",
      "uptime": 45,
      "since": 1513676965
   }
}

More Information

pasapicella@pas-macbook:~$ cf curl --help
   curl - Executes a request to the targeted API endpoint

   cf curl PATH [-iv] [-X METHOD] [-H HEADER] [-d DATA] [--output FILE]

   By default 'cf curl' will perform a GET to the specified PATH. If data
   is provided via -d, a POST will be performed instead, and the Content-Type
   will be set to application/json. You may override headers with -H and the
   request method with -X.

   For API documentation, please visit

   cf curl "/v2/apps" -X GET -H "Content-Type: application/x-www-form-urlencoded" -d 'q=name:myapp'
   cf curl "/v2/apps" -d @/path/to/file

   -H            Custom headers to include in the request, flag can be specified multiple times
   -X            HTTP method (GET,POST,PUT,DELETE,etc)
   -d            HTTP data to include in the request body, or '@' followed by a file name to read the data from
   -i            Include response headers in the output
   --output      Write curl body to FILE instead of stdout

Thursday, 23 November 2017

Taking Pivotal Cloud Foundry Small Footprint for a test drive

Pivotal Cloud Foundry (PCF) now has a small footprint edition. It features a deployment configuration with as few as 6 VMs. Review the documentation for download and installation instructions as follows

There was also a Pivotal blog post on this as follows:

As you can see from this image, it's a considerably smaller control plane.

It is important to understand what the limitations of such an install are as per the docs link below.

Installing the small footprint looks identical from the Operations Manager UI; in fact it's still labelled ERT, and from the home page of the Operations Manager UI you wouldn't even know you had the small footprint.

If you dig a bit further, click on the ERT tile, and then select the "Resource Config" link on the left, you will clearly see it's the Small Footprint PCF install.

I chose to use the internal MySQL database; if I hadn't, it could be scaled back even further than the 7 VMs I ended up with.

Lastly, I was very curious to find out which jobs are placed on which service VMs. Here is what it looked like for me when I logged into the BOSH director and ran some bosh CLI commands

ubuntu@ip-10-0-0-241:~$ bosh2 -e aws vms --column=Instance --column="Process State" --column=AZ --column="VM Type"
Using environment '' as user 'director' (bosh.*.read, openid, bosh.*.admin, bosh.admin)

Task 73. Done

Deployment 'cf-a96683b17697c86b8c90'

Instance                                             Process State  AZ               VM Type
backup-prepare/16356c40-1f20-42f0-8f2e-de45549be797  running        ap-southeast-2a  t2.micro
blobstore/b6e22107-018b-425d-8fe4-ab47eeaf2c75       running        ap-southeast-2a  m4.large
compute/5439f18f-c842-40a2-b6f3-faf6b6848716         running        ap-southeast-2a  r4.xlarge
control/68979d93-d12b-4d87-b110-d3d41a48b261         running        ap-southeast-2a  r4.xlarge
database/a3efedaa-4df6-48f5-9f20-61cf3d9f3c1b        running        ap-southeast-2a  r4.large
mysql_monitor/d40ef638-d2d0-488e-b937-99a7f5b5b334   running        ap-southeast-2a  t2.micro
router/f9573547-c5a1-43d4-be02-38a1d9e9c73e          running        ap-southeast-2a  t2.micro

7 vms

ubuntu@ip-10-0-0-241:~$ bosh2 -e aws instances --ps --column=Instance --column=Process
Using environment '' as user 'director' (bosh.*.read, openid, bosh.*.admin, bosh.admin)

Task 68. Done

Deployment 'cf-a96683b17697c86b8c90'

Instance                                                          Process
autoscaling-register-broker/9feaef45-994e-472c-8ca3-f0c39467dd6b  -
autoscaling/184df31d-a64c-49e0-8b6b-27eafdb31ca0                  -
backup-prepare/16356c40-1f20-42f0-8f2e-de45549be797               -
~                                                                 service-backup
blobstore/b6e22107-018b-425d-8fe4-ab47eeaf2c75                    -
~                                                                 blobstore_nginx
~                                                                 blobstore_url_signer
~                                                                 consul_agent
~                                                                 metron_agent
~                                                                 route_registrar
bootstrap/0dc22a1f-a1ee-4a03-85c6-fed08f37c44a                    -
compute/5439f18f-c842-40a2-b6f3-faf6b6848716                      -
~                                                                 consul_agent
~                                                                 garden
~                                                                 iptables-logger
~                                                                 metron_agent
~                                                                 netmon
~                                                                 nfsv3driver
~                                                                 rep
~                                                                 route_emitter
~                                                                 silk-daemon
~                                                                 vxlan-policy-agent
control/68979d93-d12b-4d87-b110-d3d41a48b261                      -
~                                                                 adapter
~                                                                 auctioneer
~                                                                 bbs
~                                                                 cc_uploader
~                                                                 cloud_controller_clock
~                                                                 cloud_controller_ng
~                                                                 cloud_controller_worker_1
~                                                                 cloud_controller_worker_local_1
~                                                                 cloud_controller_worker_local_2
~                                                                 consul_agent
~                                                                 doppler
~                                                                 file_server
~                                                                 locket
~                                                                 loggregator_trafficcontroller
~                                                                 metron_agent
~                                                                 nginx_cc
~                                                                 policy-server
~                                                                 reverse_log_proxy
~                                                                 route_registrar
~                                                                 routing-api
~                                                                 scheduler
~                                                                 silk-controller
~                                                                 ssh_proxy
~                                                                 statsd_injector
~                                                                 syslog_drain_binder
~                                                                 tps_watcher
~                                                                 uaa
database/a3efedaa-4df6-48f5-9f20-61cf3d9f3c1b                     -
~                                                                 cluster_health_logger
~                                                                 consul_agent
~                                                                 galera-healthcheck
~                                                                 gra-log-purger-executable
~                                                                 mariadb_ctrl
~                                                                 metron_agent
~                                                                 mysql-diag-agent
~                                                                 mysql-metrics
~                                                                 nats
~                                                                 route_registrar
~                                                                 streaming-mysql-backup-tool
~                                                                 switchboard
mysql-rejoin-unsafe/01a0aec3-b103-4c09-bc69-cabc61c513cc          -
mysql_monitor/d40ef638-d2d0-488e-b937-99a7f5b5b334                -
~                                                                 replication-canary
nfsbrokerpush/829f0292-59ab-4824-8b0b-c4af4bddbce0                -
notifications-ui/50972440-36cd-499d-ad7c-eef4df7e604b             -
notifications/968d625e-af63-4e2f-a59c-f6b789ef1cff                -
push-apps-manager/3d9760d7-6f09-453c-a052-32604b6a3235            -
push-pivotal-account/5a8782ad-82eb-4469-8242-f0873bc4a587         -
push-usage-service/9b781db1-171b-4abe-92f4-7445fd3d487f           -
router/f9573547-c5a1-43d4-be02-38a1d9e9c73e                       -
~                                                                 consul_agent
~                                                                 gorouter
~                                                                 metron_agent
smoke-tests/a8e2ff97-bae1-4594-90fc-ec8c430fd620                  -

77 instances


Now, with Small Footprint, you have yet another way to bring PCF to your organization!

Sunday, 19 November 2017

Using Spring Boot Actuator endpoint for Spring Boot application health check type on PCF

An application health check is a monitoring process that continually checks the status of a running Cloud Foundry application. When deploying an app, a developer can configure the health check type (port, process, or HTTP), a timeout for starting the application, and an endpoint (for HTTP only) for the application health check.

To use the HTTP option your manifest.yml would look like this

- name: pas-cf-manifest
  memory: 756M
  instances: 1
  hostname: pas-cf-manifest
  path: ./target/demo-0.0.1-SNAPSHOT.jar
  health-check-type: http
  health-check-http-endpoint: /health
  stack: cflinuxfs2
  timeout: 80
    NAME: Apples

Using an HTTP endpoint such as "/health" is possible once you add the Spring Boot Actuator Maven dependency as follows
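The dependency snippet is missing from this copy of the post; the standard Actuator starter coordinates (as documented by Spring Boot, with the version typically managed by the Boot parent POM) are:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
```

With Actuator on the classpath, Spring Boot exposes the /health endpoint that the manifest's health-check-http-endpoint points at.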


More Information

Saturday, 28 October 2017

Testing network connectivity from Cloud Foundry Application Instances

This app below simply tests whether a host:port is accessible from a CF app instance. For example, can my application instance access my Oracle database instance running outside of PCF, given application instances need network access to the database.

You can use bosh2 ssh to get to the Diego Cells if you have access to the environment or even "cf ssh" if that has been enabled.
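The core of such a host:port check can be sketched in plain Java with a socket connect and a timeout (an illustration of the idea, not necessarily the linked app's actual code):

```java

// Sketch of a host:port reachability check, similar in spirit to the app above.
public class ConnectivityCheck {

    // Returns "SUCCESS" if host:port accepts a TCP connection within the
    // timeout, otherwise "FAILED" (mirroring the "res" field in the JSON output).
    public static String check(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return "SUCCESS";
        } catch (IOException e) {
            return "FAILED";
        }
    }

    public static void main(String[] args) throws IOException {
        // Self-contained demo: listen on an ephemeral local port so the check succeeds.
        try (ServerSocket server = new ServerSocket(0)) {
            System.out.println(check("localhost", server.getLocalPort(), 2000));
        }
    }
}
```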

GitHub URL:


pasapicella@pas-macbook:~$ http
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 81
Content-Type: application/json;charset=UTF-8
Date: Wed, 25 Oct 2017 08:38:33 GMT
X-Vcap-Request-Id: 8fd05c77-f680-4966-558b-c45e71825fa0

{
    "errorMessage": "N/A",
    "hostname": "",
    "port": "80",
    "res": "SUCCESS"
}


pasapicella@pas-macbook:~$ http
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 110
Content-Type: application/json;charset=UTF-8
Date: Wed, 25 Oct 2017 11:52:18 GMT
X-Vcap-Request-Id: 91ad2359-7d95-49c6-4538-548e480b7820

{
    "errorMessage": "Connection refused (Connection refused)",
    "hostname": "",
    "port": "8080",
    "res": "FAILED"
}

Sunday, 22 October 2017

Just installed Pivotal Cloud Foundry — what's next? Should I login to Apps Manager?

I get this question often from customers. Pivotal Cloud Foundry has just been installed and the API endpoint to target the instance is working fine. In short, we want to do the following before we get developers onto the platform, to ensure we are no longer using the UAA admin login details from the CLI or the Apps Manager UI.

  • Create a new ADMIN user which will be used to configure Apps Manager ORGS and spaces for the developers
  • Create an ORG
  • Create at least one quota, maybe more, to control the memory limit and application instances within an ORG
  • Assign the quota to your ORG

--> Create a new ADMIN user which will be used to configure Apps Manager ORGS and spaces for the developers

1. Login to Ops Manager VM using SSH for example
2. Target the UAA server as shown below

Eg: $ uaac target uaa.YOUR-DOMAIN

ubuntu@opsmanager-pcf:~$ uaac target uaa.system.YYYY --skip-ssl-validation
Unknown key: Max-Age = 2592000

Target: https://uaa.system.YYYY

3. Authenticate and obtain an access token for the admin client from the UAA server

Note: Record the uaa:admin:client_secret from your deployment manifest

ubuntu@opsmanager-pcf:~$ uaac token client get admin -s PASSWD

Successfully fetched token via client credentials grant.
Target: https://uaa.system.YYYY
Context: admin, from client admin

4. Use the uaac contexts command to display the users and applications authorized by the UAA server, and the permissions granted to each user and application. Ensure in the "scope" field that "scim.write" exists

ubuntu@opsmanager-pcf:~$ uaac contexts

  skip_ssl_validation: true

      client_id: admin
      access_token: .....
      token_type: bearer
      expires_in: 43199
      scope: password.write clients.secret clients.write uaa.admin scim.write
      jti: b1bf094a5c4640dbac4abc5f3bf15b08

5. Run the following command to create an admin user

ubuntu@opsmanager-pcf:~$ uaac user add apples -p PASSWD --emails
user account successfully added

6. Run uaac member add GROUP NEW-ADMIN-USERNAME to add the new admin to the groups cloud_controller.admin, uaa.admin,, and scim.write

ubuntu@opsmanager-pcf:~$ uaac member add cloud_controller.admin apples
ubuntu@opsmanager-pcf:~$ uaac member add uaa.admin apples
ubuntu@opsmanager-pcf:~$ uaac member add apples
ubuntu@opsmanager-pcf:~$ uaac member add scim.write apples

--> Create an ORG

1. Login using the new admin user "apples"

pasapicella@pas-macbook:~$ cf login -u apples -p PASSWD -o system -s system
API endpoint: https://api.system.YYYY

Targeted org system

Targeted space system

API endpoint:   https://api.system.YYYY (API version: 2.94.0)
Org:            system
Space:          system

2. Create an ORG as follows

pasapicella@pas-macbook:~$ cf create-org myfirst-org
Creating org myfirst-org as apples...

Assigning role OrgManager to user apples in org myfirst-org ...

TIP: Use 'cf target -o "myfirst-org"' to target new org

--> Create at least one quota, maybe more, to control the memory limit and application instances within an ORG

1. Here we create what I call a medium-quota, which allows 20G of total memory, up to 1G of memory per application instance, at most 20 application instances, 2 service instances, and 1000 routes.

pasapicella@pas-macbook:~$ cf create-quota medium-quota -m 20G -i 1G -a 20 -s 2 -r 1000 --allow-paid-service-plans
Creating quota medium-quota as apples...

pasapicella@pas-macbook:~$ cf quota medium-quota
Getting quota medium-quota info as apples...

Total Memory           20G
Instance Memory        1G
Routes                 1000
Services               2
Paid service plans     allowed
App instance limit     20
Reserved Route Ports   0

--> Assign the quota to your ORG

1. Assign the newly created quota to the ORG we created above

pasapicella@pas-macbook:~$ cf set-quota myfirst-org medium-quota
Setting quota medium-quota to org myfirst-org as apples...

pasapicella@pas-macbook:~$ cf org myfirst-org
Getting info for org myfirst-org as apples...

name:                 myfirst-org
quota:                medium-quota
isolation segments:

Finally we can add a space to the ORG and assign privileges to a user called "pas" as shown below

- Set OrgManager role to the user "pas"

pasapicella@pas-macbook:~$ cf set-org-role pas myfirst-org OrgManager
Assigning role OrgManager to user pas in org myfirst-org as apples...

- Logout as the "apples" admin user, as "pas" can now do his own admin for the ORG "myfirst-org"

pasapicella@pas-macbook:~$ cf logout
Logging out...

- Login as pas and target the ORG

pasapicella@pas-macbook:~$ cf login -u pas -p PASSWD -o myfirst-org
API endpoint: https://api.system.YYYY

Targeted org myfirst-org

API endpoint:   https://api.system.YYYY (API version: 2.94.0)
User:           pas
Org:            myfirst-org
Space:          No space targeted, use 'cf target -s SPACE'

- Create a space which will set space roles for the user "pas"

pasapicella@pas-macbook:~$ cf create-space dev
Creating space dev in org myfirst-org as pas...
Assigning role RoleSpaceManager to user pas in org myfirst-org / space dev as pas...
Assigning role RoleSpaceDeveloper to user pas in org myfirst-org / space dev as pas...

TIP: Use 'cf target -o "myfirst-org" -s "dev"' to target new space

- Target the new space

pasapicella@pas-macbook:~$ cf target -o myfirst-org -s dev
api endpoint:
api version:    2.94.0
user:           pas
org:            myfirst-org
space:          dev

Typically we would assign other users to the spaces using "cf set-space-role .."

pasapicella@pas-macbook:~$ cf set-space-role --help
   set-space-role - Assign a space role to a user

   cf set-space-role USERNAME ORG SPACE ROLE

   'SpaceManager' - Invite and manage users, and enable features for a given space
   'SpaceDeveloper' - Create and manage apps and services, and see logs and reports
   'SpaceAuditor' - View logs, reports, and settings on this space


More Information

Creating and Managing Users with the UAA CLI (UAAC)

Creating and Managing Users with the cf CLI

Thursday, 5 October 2017

Pivotal Cloud Foundry 1.12 on Google Cloud Platform with VM labels

Once PCF is installed on GCP, it's worth noting that viewing the "Compute Engine" labels gives you an indication of which VM each CF service is associated with. The screenshots below show this.

Monday, 25 September 2017

Updating Cloud Foundry CLI using Brew

Need to upgrade the CF CLI using brew? It's as simple as below. Got to love brew!

pasapicella@pas-macbook:~$ brew upgrade cf-cli
==> Upgrading 1 outdated package, with result:
cloudfoundry/tap/cf-cli 6.31.0
==> Upgrading cloudfoundry/tap/cf-cli
Warning: Use cloudfoundry/tap/cloudfoundry-cli instead of deprecated pivotal/tap/cloudfoundry-cli
==> Downloading
==> Downloading from
######################################################################## 100.0%
==> Caveats
Bash completion has been installed to:
==> Summary
🍺  /usr/local/Cellar/cf-cli/6.31.0: 6 files, 17.6MB, built in 16 seconds

pasapicella@pas-macbook:~$ cf --version
cf version 6.31.0+b35df905d.2017-09-15

Friday, 15 September 2017

Using Cloud Foundry CUPS to inject Spring Security credentials into a Spring Boot Application

The following demo shows how to inject the Spring Security username/password credentials from a User Provided service on PCF, hence using the VCAP_SERVICES env variable to inject the values required to protect the application using HTTP Basic Authentication while running in PCF. Spring Boot automatically converts this data into a flat set of properties so you can easily get to the data as shown below.

The demo application can be found as follows

The application.yml accesses the VCAP_SERVICES CF env variable using the Spring Boot flat set of properties. The VCAP_SERVICES env variable for the bound user-provided service looks as follows:

{
  "user-provided": [
    {
      "credentials": {
        "password": "myadminpassword",
        "username": "myadminuser"
      },
      "label": "user-provided",
      "name": "my-cfcups-service",
      "syslog_drain_url": "",
      "tags": [],
      "volume_mounts": []
    }
  ]
}


spring:
  application:
    name: security-cf-cups-demo
security:
  user:
    name: ${}
    password: ${}
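The "flat set of properties" mentioned above comes from Spring Boot's VCAP post-processing: the nested VCAP_SERVICES JSON becomes dot-separated keys such as Here is a rough, self-contained illustration of that flattening idea (not Spring's actual implementation, which additionally keys each service by its name):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class VcapFlattenDemo {

    // Recursively flatten nested maps/lists into dot/bracket property keys.
    // Illustration only: Spring Boot's real VCAP post-processor behaves similarly.
    static void flatten(String prefix, Object value, Map<String, Object> out) {
        if (value instanceof Map) {
            for (Map.Entry<?, ?> entry : ((Map<?, ?>) value).entrySet()) {
                flatten(prefix + "." + entry.getKey(), entry.getValue(), out);
            }
        } else if (value instanceof List) {
            List<?> list = (List<?>) value;
            for (int i = 0; i < list.size(); i++) {
                flatten(prefix + "[" + i + "]", list.get(i), out);
            }
        } else {
            out.put(prefix, value);
        }
    }

    public static void main(String[] args) {
        Map<String, Object> credentials = new LinkedHashMap<>();
        credentials.put("username", "myadminuser");
        credentials.put("password", "myadminpassword");
        Map<String, Object> service = new LinkedHashMap<>();
        service.put("credentials", credentials);

        Map<String, Object> flat = new LinkedHashMap<>();
        flatten("", service, flat);
        // The key matches the placeholder used in application.yml above.
        System.out.println(flat.get(""));
        // prints: myadminpassword
    }
}
```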