BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//chikkutakku.com//RDFCal 1.0//EN
X-WR-CALDESC:Share calendar information in Google Calendar and iCalendar format. From neighbourhood events to nationwide events\, ちっくたっく (Chikkutakku) is the place to search today's events and plan your schedule.
X-WR-CALNAME:ちっくたっく
X-WR-TIMEZONE:UTC
BEGIN:VEVENT
SUMMARY:CSC4019Z-Research Methods
DTSTART;VALUE=DATE-TIME:20250813T070000Z
DTEND;VALUE=DATE-TIME:20250813T080000Z
UID:477262748323
DESCRIPTION:CSC4019Z-Research Methods
LOCATION:CS203
END:VEVENT
BEGIN:VEVENT
SUMMARY:NO SEMINAR SCHEDULED
DTSTART;VALUE=DATE-TIME:20250814T110000Z
DTEND;VALUE=DATE-TIME:20250814T115000Z
UID:615062722224
DESCRIPTION:Seminars do not happen every week. Check your UCT mail to find out if one is scheduled. This time is just blocked out on the calendar for scheduling purposes.
LOCATION:CS2A
END:VEVENT
BEGIN:VEVENT
SUMMARY:Honours project week 6
DTSTART;VALUE=DATE:20250810
DTEND;VALUE=DATE:20250815
UID:246399361772
DESCRIPTION:Honours project week 6
LOCATION:
END:VEVENT
BEGIN:VEVENT
SUMMARY:School of IT seminar slot (compulsory if scheduled)
DTSTART;VALUE=DATE-TIME:20250821T110000Z
DTEND;VALUE=DATE-TIME:20250821T115000Z
UID:196155585730
DESCRIPTION:Title: Meta-Learning the Intrinsic Reward Weighting in Curiosity-Driven RL\n\nAbstract: Reinforcement learning agents must find a balance between\nexploitation and exploration to maximise the cumulative sum of\nextrinsic rewards. However\, it is particularly challenging for agents\nto explore effectively in sparse reward environments where feedback is\nrare. Curiosity-driven algorithms can be used to encourage effective\nexploration. These algorithms generate an additional reward called the\nintrinsic reward that encourages agents to seek novel situations. The\nintrinsic rewards are combined with extrinsic rewards using a weighted\nsum\, where λ is the weighting for the intrinsic reward. However\, λ is\noften fine-tuned for each new environment\, even when environments are\nsimilar. We propose a meta-learning approach that replaces the fixed λ\nparameter with a neural network that outputs λ values at each time\nstep. This network is trained using evolutionary strategies and can\ngeneralise across similar environments without retraining. Our\napproach highlights the potential for reducing the need for\nfine-tuning λ across similar tasks.\n\nBiography: Batsi is a Master's student at the University of Cape Town\nwith interests in curiosity-driven reinforcement learning and\nmeta-reinforcement learning. His research focuses on improving the\nsample efficiency of reinforcement learning algorithms.
LOCATION:CS2A
END:VEVENT
BEGIN:VEVENT
SUMMARY:Honours project week 7
DTSTART;VALUE=DATE:20250817
DTEND;VALUE=DATE:20250822
UID:229349004623
DESCRIPTION:Honours project week 7
LOCATION:
END:VEVENT
BEGIN:VEVENT
SUMMARY:Honours project week 8
DTSTART;VALUE=DATE:20250824
DTEND;VALUE=DATE:20250829
UID:275075003023
DESCRIPTION:Honours project week 8
LOCATION:
END:VEVENT
BEGIN:VEVENT
SUMMARY:Venue in use: AIFMRM
DTSTART;VALUE=DATE-TIME:20250728T220000Z
DTEND;VALUE=DATE-TIME:20250831T220000Z
UID:274559703829
DESCRIPTION:Venue in use: AIFMRM
LOCATION:CS203
END:VEVENT
BEGIN:VEVENT
SUMMARY:Honours project week 9
DTSTART;VALUE=DATE:20250831
DTEND;VALUE=DATE:20250905
UID:325068959762
DESCRIPTION:Honours project week 9
LOCATION:
END:VEVENT
BEGIN:VEVENT
SUMMARY:EMPTY PLACE HOLDER FOR VENUE CHANGE NOTIFICATION: 11-12 on Thursdays only
DTSTART;VALUE=DATE-TIME:20250730T220000Z
DTEND;VALUE=DATE-TIME:20250910T220000Z
UID:324737483947
DESCRIPTION:EMPTY PLACE HOLDER FOR VENUE CHANGE NOTIFICATION: 11-12 on Thursdays only
LOCATION:CENG SEM
END:VEVENT
END:VCALENDAR