Introduction
flowchart LR
enter["Enter Tyler's Portfolio"]
tabs["Click on the tabs"]
cows["See the cows"]
enjoy["Enjoy the site!"]
enter --> tabs --> cows --> enjoy --> tabs
 _____________________________________________
| Welcome to Tyler's Portfolio! ... I'm a cow |
 =============================================
   \
    \
      ^__^
      (oo)\_______
      (__)\       )\/\
          ||----w |
          ||     ||
 __________________________________
( This is a javascript cowsay      )
( cowsay is available everywhere.. )
 ----------------------------------
        o   ^__^
         o  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
 ___________________________________
/ This is the classic Bash cowsay - \
\ (Where did the cow go?)           /
 -----------------------------------
   \
    \
        .--.
       |o_o |
       |:_/ |
      //   \ \
     (|     | )
    /'\_   _/`\
    \___)=(___/
 _____
< zzz >
 -----
        \   ^__^
         \  (--)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
Welcome!
Below is an overview of projects I have developed. I served as the development and deployment lead, which allowed me to coordinate between teams and complete projects efficiently. To see more details about each one, click on its title in the list below.
By the way, while reading about my projects, you will see specially formatted text. Click on it to see relevant code samples or diagrams.
That’s all! Enjoy! If you would like to get in contact, feel free to reach me at
-
OwlChirp
Engineered using: Python, JavaScript, SQL, AWS Infrastructure and Async Programming
Led the project to build an AWS Connect phone client with collaborative and real-time features. Handled multi-departmental coordination for deployment, security, and development. In use by the helpdesk team, and has received praise for providing needed features.
-
HelpDesk Dashboard
Engineered using: PowerShell
Designed and implemented software to automate common tickets. Coordinated with the helpdesk team on feature requests and submitted issues. The software generates reports, including user-tailored actions, user device discovery for remote control, and dynamically generated links (such as AAD user profiles or log searches). Noted for reducing the error rate and speeding up ticket resolution.
-
PowerShell ToolBox
Engineered using: PowerShell and API Development
Led the refactoring of the HelpDesk Dashboard to address additional requirements. Several dozen single-focus tools were implemented during this project and integrated into the HelpDesk Dashboard, as well as used by other controller scripts.
-
HelpDesk Analytics
Engineered using: Python, API and AWS Infrastructure
Led the engineering of a project to generate statistical data on department performance. Coordinated with AWS Support, deployment, and the Connect call flow implementation team. Powered by AWS infrastructure, it pulls data from S3 to generate critical reports.
OwlChirp
Diagram
flowchart LR
user>"User calls in"]
connect>"Routed through AWS Connect"]
owlchirp((("OwlChirp")))
answer["Call is answered"]
realtime["Realtime Metrics"]
notifications["HTML5 notifications sent to agent and team"]
user --> connect --> owlchirp ==> answer
subgraph collaboration["Collaboration"]
direction TB
owlchirp <--> realtime
owlchirp <--> notifications
end
While working with my team, I realized the phone client provided by AWS was insufficient. I didn't know who had recently called me, nor did I know what my teammates were doing. Answers to fundamental questions like "Are there several callers in the queue?" were not available. Inspired by Raymond Hettinger's talk on idiomatic Python, I decided to create an improved version.
At its core, OwlChirp utilizes the amazon-connect-streams library and provides collaborative features to each user. Without needing to reimplement core logic, I was free to use the available hooks to add extra features.
I added a server-backed recent call list, computed real-time metrics, included HTML5 notifications for custom events, and other quality-of-life features. With this, the team was able to collaborate and balance resources between calls and tickets intelligently. The software automatically determines if assistance is needed, and sends relevant notifications when necessary.
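As a rough sketch of what such a rule can look like on the server side (the helper names assistance_needed, check_metrics, and publish_event are hypothetical illustrations, not OwlChirp's actual API):

# Hypothetical sketch of an "assistance needed" rule; names and
# thresholds are illustrative, not OwlChirp's actual code.
def assistance_needed(queue_count, available_count):
    # Callers are waiting but no agent is free to answer.
    return queue_count > 0 and available_count == 0

def check_metrics(metrics, publish_event):
    # publish_event is assumed to emit an event that the client
    # surfaces as an HTML5 notification to the whole team.
    if assistance_needed(metrics["queue_count"], metrics["available_count"]):
        publish_event({"notify": "Callers waiting with no available agents"})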
Server Sent Events (SSE)
Diagram, Python, JavaScript
stateDiagram-v2
direction LR
server: OwlChirp Server
SSE: SSE Subscription
sleep: Async Sleep
client: OwlChirp Client
poll: Poll for update
pageUpdate: Update page
state server {
direction TB
poll --> AWS
AWS --> sleep
sleep --> poll
}
state client {
direction TB
Event --> pageUpdate
pageUpdate --> Event
}
server --> SSE
server --> sleep
SSE --> client
OwlChirp provides real-time updates to each user. These are delivered through Server-Sent Events when an update is available. This keeps network traffic low, since data is sent only when there are updates.
// Subscribe to an SSE endpoint and dispatch incoming updates to handlers.
export async function eventSub(endpoint) {
    // Map each event key to the page-update callbacks it should trigger.
    let events = {
        'queue_count': [realtimeUpdateQueueCount],
        'available_count': [realtimeUpdateAvailableCount, realtimeUpdateVisualAgentList],
        'handled_incoming': [realtimeUpdateHandledIncoming],
    }
    let subObj = await asyncSubscribe(API + `${endpoint}`, (r) => {
        let data = JSON.parse(r.data)
        for (let [key, _] of Object.entries(data)) {
            if (events.hasOwnProperty(key)) {
                for (let event of events[key]) {
                    event(data);
                }
            }
        }
    })
    return subObj
}
The client subscribes to events it is interested in, and when an event occurs, the relevant components are updated.
def get_data_generator(self, server_sent_event=False) -> AsyncGenerator[bytes | Any, Any]:
    """
    Returns a generator that yields all data on each change.
    """
    async def obj(func):
        previous_result = None
        while True:
            value = await func()
            # Only emit when the polled value has changed.
            if value != previous_result:
                previous_result = value
                if server_sent_event is True:
                    value = json.dumps(value)
                    event = ServerSentEvent(value)
                    yield event.sse
                else:
                    yield value
            # Poll the backend roughly once per second.
            await asyncio.sleep(1)
    return obj(self._get_data)
The server utilizes Python generators to poll the backend APIs on demand. If no one is requesting data, the server does not poll the API. Results are cached and shared among interested users, which limits the number of queries to the backend API and reduces redundant traffic.
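Conceptually, the sharing works like a fan-out: one poll loop serves every subscriber. Below is a minimal sketch of that idea, assuming a hypothetical SharedPoller class rather than OwlChirp's actual implementation.

import asyncio

# Hypothetical sketch: one polling loop fans results out to every
# subscriber, so N clients cost only one backend query per interval.
class SharedPoller:
    def __init__(self, fetch, interval=1):
        self.fetch = fetch        # async callable that queries the backend
        self.interval = interval
        self.subscribers = set()  # one asyncio.Queue per connected client

    async def run(self):
        previous = None
        while True:
            if self.subscribers:       # idle when no one is listening
                value = await self.fetch()
                if value != previous:  # only fan out actual changes
                    previous = value
                    for queue in self.subscribers:
                        queue.put_nowait(value)
            await asyncio.sleep(self.interval)

    async def subscribe(self):
        queue = asyncio.Queue()
        self.subscribers.add(queue)
        try:
            while True:
                # Each yielded value would become one Server-Sent Event.
                yield await queue.get()
        finally:
            self.subscribers.discard(queue)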
Deployment
Diagram, Shell, Text
flowchart LR
commit["Git commit"] --> github["GitHub Actions"]
github --> image["Build Docker Image"] --> repo["Push to Docker repo"]
github --> deploy["Push to Docker Swarm"]
When designing each of my programs, I made it a priority to have a commit-to-deployment pipeline. Having CI/CD allowed me to focus on the code, deploy bug fixes more quickly, and keep a consistent deployment process.
FROM python:3.10-slim AS init
[...]
FROM init AS build_server_environment
[...]
FROM caddy:2.6.2-builder AS build_reverse_proxy
[...]
FROM init AS production
COPY --from=build_reverse_proxy /opt/reverse_proxy /opt/reverse_proxy
COPY --from=build_client_environment /build/dist /app/server/static/dist
COPY --from=build_server_environment /usr/local /usr/local
To begin, I use a Dockerfile to write out the build process. Using multi-stage builds encourages logical separation and permits secrets management. It also reduces image size, as only required files make it into the final image.
mkdir -p ~/.ssh
echo "${{ secrets.REMOTE_SERVER_PRIVATE_KEY }}" > ~/.ssh/id_rsa
chmod 400 ~/.ssh/id_rsa
echo -e "Host *\n StrictHostKeyChecking no" > ~/.ssh/config
docker stack deploy --with-registry-auth -c deploy/docker-compose.yml ${{ env.REPO_NAME }}
First, the image is built and pushed to the Docker repository. Then, it is deployed to the Docker Swarm instance, which orchestrates the deployment.
This deployment approach has been effective for me, and I use it for most projects.
HelpDesk Dashboard
$CheckMFA = {
    $AZURE_MFA_groups = $current_user.AzureADUserGroups | Where-Object {
        $_.DisplayName -eq $Global:config.checks.MFA.Enrolled -or
        $_.DisplayName -eq $Global:config.checks.MFA.Notification
    }
    if ($AZURE_MFA_groups | Where-Object { $_.DisplayName -eq $Global:config.checks.MFA.Enrolled }) {
        $msg = "Is enrolled in MFA "
        Write-Host -NoNewline $msg "".PadLeft($this.MSG_PAD - $msg.length)
        Write-Host @greenCheck
        return $true
    }
    elseif ($AZURE_MFA_groups | Where-Object { $_.DisplayName -eq $Global:config.checks.MFA.Notification }) {
        $msg = "MFA not setup "
        Write-Host -NoNewLine $msg "".PadLeft($this.MSG_PAD - $msg.length)
        Write-Host @informationSign
        return -1
    }
    elseif ($AZURE_MFA_groups.length -eq 0) {
        $msg = "Not in any MFA groups"
        Write-Host -NoNewLine $msg "".PadLeft($this.MSG_PAD - $msg.length)
        Write-Host @redX
        return $false
    }
}

$task = @{
    Name        = "MFA Check"
    Description = "Verify user has correct MFA groups"
    Priority    = 50
    Group       = "Staff"
    Function    = $CheckMFA
}
$check_list.Add($task) | Out-Null
PowerShell
When I was first hired, it was during the busy season, and we were dealing with a high volume of tickets and calls each day. As I completed each incident, I realized that the majority of calls and tickets were resolved by checking a predictable set of things. For example, for account login issues:
- Is their account locked out?
- Is their account enabled?
- Are they permitted to login from this device?
- Are they enrolled in Multi-Factor Authentication (MFA)?
Unfortunately, the process to resolve each ticket was manual. There were tricks to do it quicker, but I fatigued from doing the same dozens of steps repeatedly for each incident. I was also making mistakes and accidentally skipping steps. I felt like there had to be a better way!
I decided to engineer a solution that would automate my most common steps. To do this, I took a higher-level goal, such as "Are they enrolled in MFA?", and broke it down into repeatable steps I could write into code.
With this new tool, I was able to debug common calls and tickets faster than anyone on the team.
To democratize this improved productivity, I shared my tool with my team, which magnified the impact my code had. We were now able to handle many more tickets with less effort and fewer errors.
Checks
Text
Account is disabled      X
Account does not expire  √
Account is not locked    √
Is enrolled in MFA       √
Each check is an isolated test to determine the status of a specific part of a user's account. To communicate this to the user, a short status message is provided, along with one of three colorized characters: X, √, or !.
This succinct format quickly showed me when there were issues, while including extra information when needed.
============
Summary
============
Account enabled     X
Account Expiration  √
Account lock        √
MFA Check           √
To simplify things further, at the bottom of the report is a colorized summary of the results, flashing red, yellow, or white depending on the outcome.
PowerShell ToolBox
flowchart LR
credential["Credential Management"]
api["API Queries"]
recent_device["Recent Device Lookup"]
password_reset["Password Reset"]
tlDashboard(("HelpDesk-Dashboard"))
simpleRemoteControl(("Simple Remote Control"))
emailSearch(("Email Search"))
baseLayer --> middleLayer
subgraph modules["Modules"]
direction TB
subgraph baseLayer["Base Layer"]
direction TB
credential
api
end
subgraph middleLayer["Middle Layer"]
direction TB
recent_device
password_reset
end
end
modules --> tlDashboard
modules --> simpleRemoteControl
modules --> emailSearch
Diagram, PowerShell
function Get-CredFromFile {
    [CmdletBinding()]
    param (
        [Parameter()]
        [String]$Service,
        [Switch]$ResetCredential
    )
    [...]
}
function Get-Query {
    [CmdletBinding()]
    param (
        [Parameter(Position=0, ParameterSetName="Predefined")]
        [ValidateSet("RecentDevices",
                     "WifiSearch")]
        [String]$Predefined,
        [...]
    )
    [...]
}
function Get-RecentDevices {
    [CmdletBinding()]
    Param (
        [Parameter(ValueFromPipeline)]
        [Microsoft.ActiveDirectory.Management.ADUser]$ADUser,
        [...]
    )
    [...]
}
function Set-ADPassword {
    param (
        [OutputType([Void], ParameterSetName=("Set-ADUser"))]
        [Parameter(Position=0, ValueFromPipeline)]
        [Microsoft.ActiveDirectory.Management.ADUser[]]$ADUser,
        [...]
    )
    [...]
}
After building the HelpDesk Dashboard project, I realized that I would like to reuse the functionality inside the program, but couldn't, as it was tightly coupled. To resolve this, I began a new design approach focused on Unix-style module development: focused tools that do one thing well, which can be reused within the toolbox to incorporate more functionality.
To do this, I had to implement low-level tools to handle basic functionality, such as API authorization and Active Directory search, in a reusable format. With this work done, I was free to build on top of this lower level to create useful tools, such as a better-designed HelpDesk Dashboard, an email search, and a simplified GUI remote control interface.
Remote Control
function Get-RemoteControl {
    [...]
    $user = Get-ADUser
    $devices = $user | Get-RecentDevices
    & $cmrcViewer $devices[0]
}
function Get-RecentDevices {
    $recentLogonQuery = Get-Query -Service loggingPlatform
    foreach ($item in $recentLogonQuery.output) {
        $sourcename = ($item.message.source -split '\.')[0]
        $createtime = [datetime]::Parse($item.message.EventReceivedTime)
        $obj = [PSCustomObject]@{
            ResourceName = $sourcename
            LastActive   = $createtime
            Type         = "LoggingPlatform"
        }
        [...]
    }
}
function Get-Query {
    [...]
    Get-Cred -Service $Service
    $params = @{
        Uri         = "https://logger.domain.tld/api/views/search/$($createQuery.id)/execute"
        Method      = 'POST'
        Headers     = $headers
        WebSession  = $WebSession
        ContentType = 'application/json'
    }
    $executedQuery = Invoke-RestMethod @params
}
PowerShell
A common request is to "connect to the device the user is using". Available resources such as SCCM proved inconsistent, as they would report devices "last used" months ago. I felt like there had to be a better way. I realized that our logging platform reported relevant events within about two minutes of occurrence. This was fast enough, so I built my approach around that data to determine a user's active device.
At the top level, I use Get-RemoteControl. This is a simple tool meant to be used with Get-RecentDevices. Get-RecentDevices needs to collect information from multiple services to determine the answer; to accomplish this, it goes through a series of calls to lower-level tools. With this information in hand, it sends the result upstream, and SCCM is then used to initiate the remote control.
HelpDesk Analytics
Diagram, Text
When I first began at my job, there was a business need to gain insight into call metrics. It was possible to view individual logs, but there was no comprehensive method of automatically parsing this data. The team needed this feature, and our paid support could only share blog posts and provide guidance on system implementation.
flowchart LR
incomingCall>"User calls in"]
logGenerate["Log generation"]
jsonTransform["Compute JSON properties"]
loadData["Load from s3"]
charts["View charts"]
s3["Load to s3"]
incomingCall --> processCall
subgraph processCall["AWS processing"]
direction TB
logGenerate --> jsonTransform
jsonTransform --> s3
end
subgraph HelpDesk-Analytics
direction TB
loadData --> charts
end
processCall --> HelpDesk-Analytics
To resolve this issue, I was assigned as lead to engineer a solution. During development, I had to learn how to interact with backend APIs, handle larger datasets, and parse them into meaningful data. Data durability was a requirement as well, as historical data reports would be needed.
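To illustrate the kind of parsing involved, here is a minimal sketch that reduces raw JSON call records into a per-day call count; the record field name is an assumption for illustration, not the actual schema.

import json
from collections import Counter

# Hypothetical sketch: reduce raw call records (one JSON document per
# call) into a simple per-day call count. Field names are illustrative.
def calls_per_day(raw_records):
    counts = Counter()
    for raw in raw_records:
        record = json.loads(raw)
        # Assume each record carries an ISO-8601 timestamp field.
        day = record["InitiationTimestamp"][:10]   # YYYY-MM-DD
        counts[day] += 1
    return counts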
- name: Deploy to EB
  uses: einaregilsson/beanstalk-deploy@v20
  with:
    aws_session_token: ${{ github.event.inputs.session_token }}
    aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    application_name: ${{ secrets.EB_APP_NAME }}
    environment_name: ${{ secrets.EB_ENV_NAME }}
    version_label: helpdesk-analytics-source-${{ github.run_number }}
    use_existing_version_if_available: true
    region: ${{ secrets.AWS_REGION }}
    existing_bucket_name: ${{ secrets.EB_BUCKET }}
    wait_for_deployment: false
    deployment_package: deploy.zip
Finally, deployment was handled in the standard way, with the exception that the application was deployed to AWS Elastic Beanstalk. Later on, this deployment method was retired and I migrated the deployment to Docker Swarm.
AWS Infrastructure
import base64
import json

def lambda_handler(event, context):
    output = list()
    for record in event['records']:
        # Records arrive base64-encoded; decode, parse, and transform them.
        payload = base64.b64decode(record['data'])
        raw_json = json.loads(payload)
        formatted_json = actions_to_take(raw_json)
        payload = json.dumps(formatted_json)
        payload = payload.encode()
        # Return each transformed record in the format Firehose expects.
        outputRecord = {
            'recordId': record['recordId'],
            'result': 'Ok',
            'data': base64.b64encode(payload),
        }
        output.append(outputRecord)
    returnValue = {'records': output}
    return returnValue
Python
The first step was to get the data into a manageable format. I elected to export this data to AWS S3 in JSON format after transforming the output with a Lambda function. With this, I was able to use Python's boto3 library to retrieve the files. To reduce network traffic and costs, I organized the files into the recommended YYYY/MM/DD prefix format.
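A minimal sketch of that retrieval pattern with boto3, assuming a hypothetical bucket name and the YYYY/MM/DD prefix layout:

import boto3
from datetime import date

# Hypothetical sketch: list and fetch one day's records using the
# YYYY/MM/DD prefix layout. The bucket name is illustrative.
s3 = boto3.client('s3')
BUCKET = 'helpdesk-analytics-records'  # hypothetical

def records_for_day(day: date):
    # A date-based prefix means S3 lists only that day's objects,
    # instead of scanning the whole bucket.
    prefix = day.strftime('%Y/%m/%d/')
    paginator = s3.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=BUCKET, Prefix=prefix):
        for obj in page.get('Contents', []):
            body = s3.get_object(Bucket=BUCKET, Key=obj['Key'])['Body']
            yield body.read()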
This design choice grew out of the need for future-proof storage of the data, allowing other programs to load and analyze it later on.
Eventually, HelpDesk Analytics was deprecated in favor of another solution I created. However, due to this design, I was able to reuse the record storage and import all stored data into the new system!