Joe Badaczewski

Senior Software Engineer at SoftWriters

MS in Multimedia Technology + BA in Digital Media Arts from Duquesne University





Projects

end

active development

Interplanetary macroeconomic simulator with RPG and turn-based strategy elements


Built with: TypeScript, React, React Native, Nx, Three.js, React Three Fiber, NestJS, WatermelonDB, MongoDB

Deployed using: Docker and Fly.io

Principles: Cross-platform (web + native mobile), offline-first




act

active development

General purpose achievement tracker


Built with: TypeScript, React, React Native, Nx, NestJS, WatermelonDB, CouchDB, Keycloak

Principles: Self-hostable with Docker Compose, cross-platform (web + native mobile), offline-first





Activity

Programming on GitHub


May 5th (end): working examples of navigating between tiles with a lot of edge cases covered (crossing over polar maxes + traveling through the center of the hexasphere + rotating numbers to face the camera) [hexasphere]


View this commit on GitHub


Rating movies using Showly


Jan 12th: Inception


My ratings


Continuing education and taking notes



Today I continued the above course on LLMs and studied a topic called "in-context learning" (ICL). ICL is a part of prompt engineering. Prompt engineering is the practice of giving an LLM clear instructions and guidance in its prompt to receive more desirable output. Engineers use ICL to steer a model toward a specific domain by providing examples of successful outcomes directly in the prompt; the goal is to encourage the LLM to learn by example without retraining it. However, prompt engineering has to be cognizant of the "context window": ICL is restricted by its size, which for some models is only a few thousand tokens.
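As a concrete illustration, here is a minimal TypeScript sketch of how a few-shot (in-context learning) prompt might be assembled. The sentiment-classification task, the example data, and the buildFewShotPrompt helper are hypothetical placeholders, not part of any particular library or course material.

```ts
// Hypothetical sketch: assembling a few-shot (in-context learning) prompt.
// The task, examples, and helper names are placeholders, not a real API.

interface Example {
  input: string;
  output: string;
}

// A handful of labeled examples placed directly in the prompt.
const examples: Example[] = [
  { input: "The package arrived two weeks late.", output: "negative" },
  { input: "Setup took five minutes and it just worked.", output: "positive" },
];

function buildFewShotPrompt(shots: Example[], query: string): string {
  const formattedShots = shots
    .map((e) => `Review: ${e.input}\nSentiment: ${e.output}`)
    .join("\n\n");
  // The model infers the task pattern from the examples that precede the query.
  return `${formattedShots}\n\nReview: ${query}\nSentiment:`;
}

const prompt = buildFewShotPrompt(examples, "The battery died after one day.");
// Everything here must fit inside the model's context window.
console.log(prompt);
```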


For my personal project end (which at its core is a turn-based strategy game similar to Risk on an interplanetary scale), I can envision using an LLM to determine the overall "winner" of a match. To clarify, each match in end will have a multi-faceted outcome that looks at resource usage and overall soldier count to determine a set of different "outcomes" that could have far-reaching effects on several different planetary systems. By using "multi-shot inference", I can provide multiple basic example "game winners" from other matches directly in the prompt to help the LLM learn what a "winner" truly looks like in the context of a multi-dimensional gaming system.
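A rough TypeScript sketch of what that prompt assembly could look like follows. The MatchSummary shape, the example verdicts, and the helper names are my own assumptions for illustration; the real game model and whichever LLM API end ends up using would differ.

```ts
// Hypothetical sketch for "end": a multi-shot prompt that asks an LLM to judge a match.
// MatchSummary fields and the example data are placeholders, not the real game model.

interface MatchSummary {
  resourcesSpent: number;
  soldiersRemaining: number;
  systemsControlled: number;
  verdict?: string; // filled in only for the example matches
}

const pastMatches: MatchSummary[] = [
  {
    resourcesSpent: 1200,
    soldiersRemaining: 340,
    systemsControlled: 5,
    verdict: "Decisive victory: economy intact, majority of systems held.",
  },
  {
    resourcesSpent: 4800,
    soldiersRemaining: 20,
    systemsControlled: 6,
    verdict: "Pyrrhic victory: most systems held, but army and economy exhausted.",
  },
];

function describe(m: MatchSummary): string {
  return `Resources spent: ${m.resourcesSpent}, soldiers remaining: ${m.soldiersRemaining}, systems controlled: ${m.systemsControlled}`;
}

function buildWinnerPrompt(examples: MatchSummary[], current: MatchSummary): string {
  // Each example match and its verdict becomes one "shot" in the prompt.
  const shots = examples
    .map((m) => `${describe(m)}\nVerdict: ${m.verdict}`)
    .join("\n\n");
  return `${shots}\n\n${describe(current)}\nVerdict:`;
}

// The completed prompt would then be sent to whichever LLM the project adopts.
const prompt = buildWinnerPrompt(pastMatches, {
  resourcesSpent: 2100,
  soldiersRemaining: 150,
  systemsControlled: 4,
});
console.log(prompt);
```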


View this note on GitHub