Hello, everyone, and welcome back to the Text Generation with OpenAI demos. This demo follows the lesson, “Building a Non-Streaming Chat App”. In this video, you’ll add a chat-like interface to bulk-generate JSON data.
To make a chat-like interface for bulk-generating JSON data, you’ll use chunks of code from the previous demo and a loop like the one from the instruction chapter.
Start from a blank, empty file.
Then, in JupyterLab, again make sure that you’ve exported the API key in your environment.
As in the previous demo, add the following code in the first cell of your notebook file:
import os
import openai
from openai import OpenAI

openai.api_key = os.environ["OPENAI_API_KEY"]
model = "gpt-4o-mini"
client = OpenAI()
You should know this code already because you’ve been using it for a couple of lessons. :]
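If the key isn’t exported, the cell above fails with a bare KeyError. A small guard can fail fast with a friendlier message. This is a minimal sketch; the helper name `require_api_key` is my own and not part of the lesson:

```python
import os

def require_api_key(var_name="OPENAI_API_KEY"):
    """Return the API key from the environment, or raise a clear error."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set. Export it before starting JupyterLab."
        )
    return key
```

You could call `require_api_key()` at the top of the first cell so a missing key is reported in plain words instead of a traceback deep in the client code.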
Also in the previous demo, you used this code to generate data for unit tests:
SYSTEM_PROMPT = (
    "You generate sample JSON data for unit tests. "
    "Generate variants that are as diverse as possible. "
    # Insert from here
    "If the expected type is a number, generate negative numbers, zero, "
    "extremely large numbers, or other unexpected inputs like a string. "
    "If the expected type is an enum, generate non-enum values. "
    "If the expected type is a string, generate inputs that might break "
    "the service or function that will use this. "
    # End insert here
    "You must return a response in JSON format:"
    "{"
    "  fullName: <name of person who ordered>,"
    "  itemName: <name of the item ordered>,"
    "  quantity: <number of items ordered>,"
    "  type: <pickup or delivery>"
    "}"
)
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
]

response = client.chat.completions.create(
    model=model,
    messages=messages,
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)
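Because `response_format={"type": "json_object"}` constrains the reply to valid JSON, you can parse it with `json.loads` and sanity-check the fields against the schema in `SYSTEM_PROMPT`. A minimal sketch — the `validate_order` helper is my own, and `sample` is a hypothetical reply standing in for `response.choices[0].message.content`:

```python
import json

EXPECTED_KEYS = {"fullName", "itemName", "quantity", "type"}

def validate_order(raw: str) -> dict:
    """Parse a model reply and check it has the keys SYSTEM_PROMPT asks for."""
    order = json.loads(raw)
    missing = EXPECTED_KEYS - order.keys()
    if missing:
        raise ValueError(f"Reply is missing keys: {missing}")
    return order

# Hypothetical reply; a real one comes from response.choices[0].message.content.
sample = '{"fullName": "Ada Lovelace", "itemName": "Laptop", "quantity": -3, "type": "pickup"}'
order = validate_order(sample)
print(order["quantity"])  # prints -3 -- a negative quantity is exactly the kind of edge case the prompt requests
```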
If you need some details on what these lines do, please refer to the previous demo.
Use an improved version of the loop code from the instruction section to make a chat-like interface. Combine the chat-completion call with the main loop:
# 1
while True:
    # 2
    user_input = input("Please enter your input: ")
    # 3
    if user_input.lower() == 'exit':
        break
    # 4
    messages.append({"role": "user", "content": user_input})
    # 5
    try:
        response = client.chat.completions.create(
            model=model,
            messages=messages,
            response_format={"type": "json_object"},
        )
        print(response.choices[0].message.content)
    except openai.RateLimitError as e:
        print(f"Rate limit exceeded: {e}")
This code snippet implements a simple chat-like interface using an infinite loop. Here’s what it does:
1. The while True loop keeps prompting for input until the user exits.
2. input captures the user’s input.
3. If exit is entered, break exits the loop.
4. The user input is appended to the messages list.
5. The try block calls the chat completion. On success, it prints the assistant’s response. On a RateLimitError, the except block displays a rate-limit message.
Run all the cells, and you should see the input box. Enter text like Generate 1 example and press Enter.
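If you’d like to trace the loop’s control flow without spending tokens, you can sketch the same structure with the input and completion calls injected as functions. This is only an offline sketch — `run_chat`, `fake_complete`, and the scripted inputs are illustrative names, not part of the lesson:

```python
def run_chat(messages, read_input, complete):
    """Same flow as the demo loop: read input, stop on 'exit', else append and reply."""
    replies = []
    while True:
        user_input = read_input()
        if user_input.lower() == "exit":
            break
        messages.append({"role": "user", "content": user_input})
        replies.append(complete(messages))
    return replies

# Scripted stand-ins so the loop runs without the API.
scripted = iter(["Generate 1 example", "exit"])

def fake_complete(msgs):
    # Stand-in for client.chat.completions.create(...); returns a canned JSON string.
    return '{"fullName": "Test", "itemName": "Mug", "quantity": 0, "type": "delivery"}'

demo_messages = [{"role": "system", "content": "You generate sample JSON data for unit tests."}]
replies = run_chat(demo_messages, lambda: next(scripted), fake_complete)
print(len(replies))  # prints 1 -- one reply before 'exit' ends the loop
```

Swapping `fake_complete` for the real chat-completion call gives back exactly the loop you just wrote.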